Feb 9 18:38:48.750558 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 9 18:38:48.750580 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 17:24:35 -00 2024
Feb 9 18:38:48.750588 kernel: efi: EFI v2.70 by EDK II
Feb 9 18:38:48.750594 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Feb 9 18:38:48.750599 kernel: random: crng init done
Feb 9 18:38:48.750604 kernel: ACPI: Early table checksum verification disabled
Feb 9 18:38:48.750611 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Feb 9 18:38:48.750618 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 9 18:38:48.750624 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:38:48.750629 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:38:48.750635 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:38:48.750640 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:38:48.750646 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:38:48.750651 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:38:48.750660 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:38:48.750665 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:38:48.750672 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:38:48.750678 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 9 18:38:48.750684 kernel: NUMA: Failed to initialise from firmware
Feb 9 18:38:48.750689 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 18:38:48.750695 kernel: NUMA: NODE_DATA [mem 0xdcb0a900-0xdcb0ffff]
Feb 9 18:38:48.750701 kernel: Zone ranges:
Feb 9 18:38:48.750707 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 18:38:48.750714 kernel: DMA32 empty
Feb 9 18:38:48.750720 kernel: Normal empty
Feb 9 18:38:48.750732 kernel: Movable zone start for each node
Feb 9 18:38:48.750738 kernel: Early memory node ranges
Feb 9 18:38:48.750744 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Feb 9 18:38:48.750750 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Feb 9 18:38:48.750756 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Feb 9 18:38:48.750762 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Feb 9 18:38:48.750768 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Feb 9 18:38:48.750774 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Feb 9 18:38:48.750781 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Feb 9 18:38:48.750787 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 18:38:48.750794 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 9 18:38:48.750800 kernel: psci: probing for conduit method from ACPI.
Feb 9 18:38:48.750806 kernel: psci: PSCIv1.1 detected in firmware.
Feb 9 18:38:48.750812 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 18:38:48.750819 kernel: psci: Trusted OS migration not required
Feb 9 18:38:48.750827 kernel: psci: SMC Calling Convention v1.1
Feb 9 18:38:48.750834 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 9 18:38:48.750846 kernel: ACPI: SRAT not present
Feb 9 18:38:48.750853 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 18:38:48.750859 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 18:38:48.750866 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 9 18:38:48.750872 kernel: Detected PIPT I-cache on CPU0
Feb 9 18:38:48.750878 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 18:38:48.750884 kernel: CPU features: detected: Hardware dirty bit management
Feb 9 18:38:48.750890 kernel: CPU features: detected: Spectre-v4
Feb 9 18:38:48.750897 kernel: CPU features: detected: Spectre-BHB
Feb 9 18:38:48.750904 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 18:38:48.750911 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 18:38:48.750917 kernel: CPU features: detected: ARM erratum 1418040
Feb 9 18:38:48.750923 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 9 18:38:48.750929 kernel: Policy zone: DMA
Feb 9 18:38:48.750937 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4
Feb 9 18:38:48.750943 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 18:38:48.750950 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 18:38:48.750956 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 18:38:48.750963 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 18:38:48.750970 kernel: Memory: 2459148K/2572288K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 113140K reserved, 0K cma-reserved)
Feb 9 18:38:48.750977 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 9 18:38:48.750983 kernel: trace event string verifier disabled
Feb 9 18:38:48.750990 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 18:38:48.750996 kernel: rcu: RCU event tracing is enabled.
Feb 9 18:38:48.751003 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 9 18:38:48.751009 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 18:38:48.751015 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 18:38:48.751022 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 18:38:48.751028 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 9 18:38:48.751034 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 18:38:48.751040 kernel: GICv3: 256 SPIs implemented
Feb 9 18:38:48.751048 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 18:38:48.751054 kernel: GICv3: Distributor has no Range Selector support
Feb 9 18:38:48.751060 kernel: Root IRQ handler: gic_handle_irq
Feb 9 18:38:48.751066 kernel: GICv3: 16 PPIs implemented
Feb 9 18:38:48.751073 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 9 18:38:48.751079 kernel: ACPI: SRAT not present
Feb 9 18:38:48.751085 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 9 18:38:48.751091 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 9 18:38:48.751098 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Feb 9 18:38:48.751104 kernel: GICv3: using LPI property table @0x00000000400d0000
Feb 9 18:38:48.751113 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Feb 9 18:38:48.751122 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 18:38:48.751130 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 9 18:38:48.751137 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 9 18:38:48.751143 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 9 18:38:48.751149 kernel: arm-pv: using stolen time PV
Feb 9 18:38:48.751156 kernel: Console: colour dummy device 80x25
Feb 9 18:38:48.751162 kernel: ACPI: Core revision 20210730
Feb 9 18:38:48.751169 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 9 18:38:48.751175 kernel: pid_max: default: 32768 minimum: 301
Feb 9 18:38:48.751182 kernel: LSM: Security Framework initializing
Feb 9 18:38:48.751188 kernel: SELinux: Initializing.
Feb 9 18:38:48.751196 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 18:38:48.751202 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 18:38:48.751209 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 18:38:48.751215 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 9 18:38:48.751222 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 9 18:38:48.751228 kernel: Remapping and enabling EFI services.
Feb 9 18:38:48.751234 kernel: smp: Bringing up secondary CPUs ...
Feb 9 18:38:48.751241 kernel: Detected PIPT I-cache on CPU1
Feb 9 18:38:48.751247 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 9 18:38:48.751255 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Feb 9 18:38:48.751262 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 18:38:48.751268 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 9 18:38:48.751275 kernel: Detected PIPT I-cache on CPU2
Feb 9 18:38:48.751281 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 9 18:38:48.751288 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Feb 9 18:38:48.751294 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 18:38:48.751300 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 9 18:38:48.751307 kernel: Detected PIPT I-cache on CPU3
Feb 9 18:38:48.751313 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 9 18:38:48.751321 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Feb 9 18:38:48.751328 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 18:38:48.751334 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 9 18:38:48.751341 kernel: smp: Brought up 1 node, 4 CPUs
Feb 9 18:38:48.751352 kernel: SMP: Total of 4 processors activated.
Feb 9 18:38:48.751366 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 18:38:48.751373 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 9 18:38:48.751380 kernel: CPU features: detected: Common not Private translations
Feb 9 18:38:48.751387 kernel: CPU features: detected: CRC32 instructions
Feb 9 18:38:48.751393 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 9 18:38:48.751400 kernel: CPU features: detected: LSE atomic instructions
Feb 9 18:38:48.751407 kernel: CPU features: detected: Privileged Access Never
Feb 9 18:38:48.751442 kernel: CPU features: detected: RAS Extension Support
Feb 9 18:38:48.751453 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 9 18:38:48.751460 kernel: CPU: All CPU(s) started at EL1
Feb 9 18:38:48.751467 kernel: alternatives: patching kernel code
Feb 9 18:38:48.751476 kernel: devtmpfs: initialized
Feb 9 18:38:48.751483 kernel: KASLR enabled
Feb 9 18:38:48.751490 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 18:38:48.751497 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 9 18:38:48.751503 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 18:38:48.751510 kernel: SMBIOS 3.0.0 present.
Feb 9 18:38:48.751517 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Feb 9 18:38:48.751524 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 18:38:48.751530 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 18:38:48.751537 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 18:38:48.751545 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 18:38:48.751552 kernel: audit: initializing netlink subsys (disabled)
Feb 9 18:38:48.751559 kernel: audit: type=2000 audit(0.038:1): state=initialized audit_enabled=0 res=1
Feb 9 18:38:48.751566 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 18:38:48.751573 kernel: cpuidle: using governor menu
Feb 9 18:38:48.751580 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 18:38:48.751587 kernel: ASID allocator initialised with 32768 entries
Feb 9 18:38:48.751593 kernel: ACPI: bus type PCI registered
Feb 9 18:38:48.751600 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 18:38:48.751608 kernel: Serial: AMBA PL011 UART driver
Feb 9 18:38:48.751615 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 18:38:48.751622 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 18:38:48.751632 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 18:38:48.751639 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 18:38:48.751646 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 18:38:48.751653 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 18:38:48.751660 kernel: ACPI: Added _OSI(Module Device)
Feb 9 18:38:48.751667 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 18:38:48.751675 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 18:38:48.751682 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 18:38:48.751689 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 18:38:48.751695 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 18:38:48.751702 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 18:38:48.751709 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 18:38:48.751716 kernel: ACPI: Interpreter enabled
Feb 9 18:38:48.751723 kernel: ACPI: Using GIC for interrupt routing
Feb 9 18:38:48.751729 kernel: ACPI: MCFG table detected, 1 entries
Feb 9 18:38:48.751738 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 9 18:38:48.751747 kernel: printk: console [ttyAMA0] enabled
Feb 9 18:38:48.751753 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 9 18:38:48.751902 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 18:38:48.751970 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 9 18:38:48.752030 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 9 18:38:48.752089 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 9 18:38:48.752152 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 9 18:38:48.752161 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 9 18:38:48.752168 kernel: PCI host bridge to bus 0000:00
Feb 9 18:38:48.752234 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 9 18:38:48.752289 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 9 18:38:48.752343 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 9 18:38:48.752406 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 9 18:38:48.752491 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 9 18:38:48.752562 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 9 18:38:48.752632 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 9 18:38:48.752699 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 9 18:38:48.752764 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 9 18:38:48.752827 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 9 18:38:48.752895 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 9 18:38:48.752963 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 9 18:38:48.753021 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 9 18:38:48.753080 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 9 18:38:48.753136 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 9 18:38:48.753145 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 9 18:38:48.753152 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 9 18:38:48.753159 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 9 18:38:48.753167 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 9 18:38:48.753174 kernel: iommu: Default domain type: Translated
Feb 9 18:38:48.753181 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 18:38:48.753191 kernel: vgaarb: loaded
Feb 9 18:38:48.753198 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 18:38:48.753205 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 18:38:48.753212 kernel: PTP clock support registered
Feb 9 18:38:48.753218 kernel: Registered efivars operations
Feb 9 18:38:48.753225 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 18:38:48.753232 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 18:38:48.753241 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 18:38:48.753247 kernel: pnp: PnP ACPI init
Feb 9 18:38:48.753315 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 9 18:38:48.753325 kernel: pnp: PnP ACPI: found 1 devices
Feb 9 18:38:48.753332 kernel: NET: Registered PF_INET protocol family
Feb 9 18:38:48.753338 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 18:38:48.753345 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 18:38:48.753352 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 18:38:48.753366 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 18:38:48.753374 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 18:38:48.753380 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 18:38:48.753387 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 18:38:48.753394 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 18:38:48.753401 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 18:38:48.753408 kernel: PCI: CLS 0 bytes, default 64
Feb 9 18:38:48.753421 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 9 18:38:48.753430 kernel: kvm [1]: HYP mode not available
Feb 9 18:38:48.753437 kernel: Initialise system trusted keyrings
Feb 9 18:38:48.753444 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 18:38:48.753450 kernel: Key type asymmetric registered
Feb 9 18:38:48.753457 kernel: Asymmetric key parser 'x509' registered
Feb 9 18:38:48.753464 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 18:38:48.753471 kernel: io scheduler mq-deadline registered
Feb 9 18:38:48.753477 kernel: io scheduler kyber registered
Feb 9 18:38:48.753484 kernel: io scheduler bfq registered
Feb 9 18:38:48.753491 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 9 18:38:48.753499 kernel: ACPI: button: Power Button [PWRB]
Feb 9 18:38:48.753506 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 9 18:38:48.753576 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 9 18:38:48.753586 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 18:38:48.753592 kernel: thunder_xcv, ver 1.0
Feb 9 18:38:48.753599 kernel: thunder_bgx, ver 1.0
Feb 9 18:38:48.753606 kernel: nicpf, ver 1.0
Feb 9 18:38:48.753613 kernel: nicvf, ver 1.0
Feb 9 18:38:48.753691 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 18:38:48.753753 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T18:38:48 UTC (1707503928)
Feb 9 18:38:48.753762 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 18:38:48.753769 kernel: NET: Registered PF_INET6 protocol family
Feb 9 18:38:48.753775 kernel: Segment Routing with IPv6
Feb 9 18:38:48.753782 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 18:38:48.753789 kernel: NET: Registered PF_PACKET protocol family
Feb 9 18:38:48.753796 kernel: Key type dns_resolver registered
Feb 9 18:38:48.753803 kernel: registered taskstats version 1
Feb 9 18:38:48.753811 kernel: Loading compiled-in X.509 certificates
Feb 9 18:38:48.753818 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 947a80114e81e2815f6db72a0d388260762488f9'
Feb 9 18:38:48.753824 kernel: Key type .fscrypt registered
Feb 9 18:38:48.753831 kernel: Key type fscrypt-provisioning registered
Feb 9 18:38:48.753838 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 18:38:48.753845 kernel: ima: Allocated hash algorithm: sha1
Feb 9 18:38:48.753851 kernel: ima: No architecture policies found
Feb 9 18:38:48.753858 kernel: Freeing unused kernel memory: 34688K
Feb 9 18:38:48.753865 kernel: Run /init as init process
Feb 9 18:38:48.753873 kernel: with arguments:
Feb 9 18:38:48.753883 kernel: /init
Feb 9 18:38:48.753890 kernel: with environment:
Feb 9 18:38:48.753901 kernel: HOME=/
Feb 9 18:38:48.753908 kernel: TERM=linux
Feb 9 18:38:48.753915 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 18:38:48.753924 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 18:38:48.753933 systemd[1]: Detected virtualization kvm.
Feb 9 18:38:48.753942 systemd[1]: Detected architecture arm64.
Feb 9 18:38:48.753949 systemd[1]: Running in initrd.
Feb 9 18:38:48.753956 systemd[1]: No hostname configured, using default hostname.
Feb 9 18:38:48.753964 systemd[1]: Hostname set to .
Feb 9 18:38:48.753971 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 18:38:48.753979 systemd[1]: Queued start job for default target initrd.target.
Feb 9 18:38:48.753986 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 18:38:48.753993 systemd[1]: Reached target cryptsetup.target.
Feb 9 18:38:48.754002 systemd[1]: Reached target paths.target.
Feb 9 18:38:48.754009 systemd[1]: Reached target slices.target.
Feb 9 18:38:48.754016 systemd[1]: Reached target swap.target.
Feb 9 18:38:48.754024 systemd[1]: Reached target timers.target.
Feb 9 18:38:48.754031 systemd[1]: Listening on iscsid.socket.
Feb 9 18:38:48.754039 systemd[1]: Listening on iscsiuio.socket.
Feb 9 18:38:48.754046 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 18:38:48.754055 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 18:38:48.754063 systemd[1]: Listening on systemd-journald.socket.
Feb 9 18:38:48.754070 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 18:38:48.754078 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 18:38:48.754085 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 18:38:48.754093 systemd[1]: Reached target sockets.target.
Feb 9 18:38:48.754100 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 18:38:48.754107 systemd[1]: Finished network-cleanup.service.
Feb 9 18:38:48.754115 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 18:38:48.754124 systemd[1]: Starting systemd-journald.service...
Feb 9 18:38:48.754131 systemd[1]: Starting systemd-modules-load.service...
Feb 9 18:38:48.754139 systemd[1]: Starting systemd-resolved.service...
Feb 9 18:38:48.754146 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 18:38:48.754153 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 18:38:48.754161 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 18:38:48.754168 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 18:38:48.754179 systemd-journald[290]: Journal started
Feb 9 18:38:48.754218 systemd-journald[290]: Runtime Journal (/run/log/journal/37e7e4711f0c4459a4e5498c4c428200) is 6.0M, max 48.7M, 42.6M free.
Feb 9 18:38:48.748004 systemd-modules-load[291]: Inserted module 'overlay'
Feb 9 18:38:48.755739 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 18:38:48.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:48.758430 kernel: audit: type=1130 audit(1707503928.756:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:48.758448 systemd[1]: Started systemd-journald.service.
Feb 9 18:38:48.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:48.759775 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 18:38:48.766048 kernel: audit: type=1130 audit(1707503928.759:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:48.766067 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 18:38:48.766076 kernel: audit: type=1130 audit(1707503928.763:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:48.766085 kernel: Bridge firewalling registered
Feb 9 18:38:48.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:48.765581 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 18:38:48.765974 systemd-modules-load[291]: Inserted module 'br_netfilter'
Feb 9 18:38:48.779645 kernel: SCSI subsystem initialized
Feb 9 18:38:48.782346 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 18:38:48.788440 kernel: audit: type=1130 audit(1707503928.783:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:48.788459 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 18:38:48.788469 kernel: device-mapper: uevent: version 1.0.3
Feb 9 18:38:48.788478 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 18:38:48.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:48.783093 systemd-resolved[292]: Positive Trust Anchors:
Feb 9 18:38:48.783100 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 18:38:48.783127 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 18:38:48.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:48.783840 systemd[1]: Starting dracut-cmdline.service...
Feb 9 18:38:48.801514 kernel: audit: type=1130 audit(1707503928.791:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:48.801537 kernel: audit: type=1130 audit(1707503928.798:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:48.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:48.789542 systemd-modules-load[291]: Inserted module 'dm_multipath'
Feb 9 18:38:48.802200 dracut-cmdline[308]: dracut-dracut-053
Feb 9 18:38:48.789784 systemd-resolved[292]: Defaulting to hostname 'linux'.
Feb 9 18:38:48.803843 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4
Feb 9 18:38:48.790231 systemd[1]: Finished systemd-modules-load.service.
Feb 9 18:38:48.791554 systemd[1]: Started systemd-resolved.service.
Feb 9 18:38:48.798564 systemd[1]: Reached target nss-lookup.target.
Feb 9 18:38:48.801940 systemd[1]: Starting systemd-sysctl.service...
Feb 9 18:38:48.810074 systemd[1]: Finished systemd-sysctl.service.
Feb 9 18:38:48.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:48.813437 kernel: audit: type=1130 audit(1707503928.810:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:48.860437 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 18:38:48.868441 kernel: iscsi: registered transport (tcp)
Feb 9 18:38:48.881708 kernel: iscsi: registered transport (qla4xxx)
Feb 9 18:38:48.881723 kernel: QLogic iSCSI HBA Driver
Feb 9 18:38:48.915693 systemd[1]: Finished dracut-cmdline.service.
Feb 9 18:38:48.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:48.917160 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 18:38:48.919537 kernel: audit: type=1130 audit(1707503928.915:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:48.959437 kernel: raid6: neonx8 gen() 13570 MB/s
Feb 9 18:38:48.976428 kernel: raid6: neonx8 xor() 10656 MB/s
Feb 9 18:38:48.993432 kernel: raid6: neonx4 gen() 13318 MB/s
Feb 9 18:38:49.010428 kernel: raid6: neonx4 xor() 11089 MB/s
Feb 9 18:38:49.027430 kernel: raid6: neonx2 gen() 12928 MB/s
Feb 9 18:38:49.044438 kernel: raid6: neonx2 xor() 10141 MB/s
Feb 9 18:38:49.061424 kernel: raid6: neonx1 gen() 10318 MB/s
Feb 9 18:38:49.078433 kernel: raid6: neonx1 xor() 8638 MB/s
Feb 9 18:38:49.095430 kernel: raid6: int64x8 gen() 6197 MB/s
Feb 9 18:38:49.112428 kernel: raid6: int64x8 xor() 3484 MB/s
Feb 9 18:38:49.129425 kernel: raid6: int64x4 gen() 7141 MB/s
Feb 9 18:38:49.146427 kernel: raid6: int64x4 xor() 3808 MB/s
Feb 9 18:38:49.163426 kernel: raid6: int64x2 gen() 6054 MB/s
Feb 9 18:38:49.180426 kernel: raid6: int64x2 xor() 3268 MB/s
Feb 9 18:38:49.197429 kernel: raid6: int64x1 gen() 4971 MB/s
Feb 9 18:38:49.214626 kernel: raid6: int64x1 xor() 2605 MB/s
Feb 9 18:38:49.214637 kernel: raid6: using algorithm neonx8 gen() 13570 MB/s
Feb 9 18:38:49.214646 kernel: raid6: .... xor() 10656 MB/s, rmw enabled
Feb 9 18:38:49.214654 kernel: raid6: using neon recovery algorithm
Feb 9 18:38:49.225770 kernel: xor: measuring software checksum speed
Feb 9 18:38:49.225783 kernel: 8regs : 17289 MB/sec
Feb 9 18:38:49.226629 kernel: 32regs : 20755 MB/sec
Feb 9 18:38:49.227803 kernel: arm64_neon : 27778 MB/sec
Feb 9 18:38:49.227813 kernel: xor: using function: arm64_neon (27778 MB/sec)
Feb 9 18:38:49.282434 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 18:38:49.293150 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 18:38:49.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:49.294719 systemd[1]: Starting systemd-udevd.service...
Feb 9 18:38:49.297204 kernel: audit: type=1130 audit(1707503929.293:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:49.293000 audit: BPF prog-id=7 op=LOAD
Feb 9 18:38:49.293000 audit: BPF prog-id=8 op=LOAD
Feb 9 18:38:49.311152 systemd-udevd[492]: Using default interface naming scheme 'v252'.
Feb 9 18:38:49.315515 systemd[1]: Started systemd-udevd.service.
Feb 9 18:38:49.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:49.316827 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 18:38:49.327999 dracut-pre-trigger[495]: rd.md=0: removing MD RAID activation
Feb 9 18:38:49.356340 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 18:38:49.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:49.357929 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 18:38:49.392467 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 18:38:49.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:49.419432 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 9 18:38:49.421729 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 9 18:38:49.421755 kernel: GPT:9289727 != 19775487
Feb 9 18:38:49.421765 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 9 18:38:49.421774 kernel: GPT:9289727 != 19775487
Feb 9 18:38:49.422433 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 9 18:38:49.422445 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 18:38:49.435440 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (537)
Feb 9 18:38:49.436951 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 18:38:49.444182 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 18:38:49.448934 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 18:38:49.449935 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 18:38:49.454103 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 18:38:49.455768 systemd[1]: Starting disk-uuid.service...
Feb 9 18:38:49.461981 disk-uuid[562]: Primary Header is updated.
Feb 9 18:38:49.461981 disk-uuid[562]: Secondary Entries is updated.
Feb 9 18:38:49.461981 disk-uuid[562]: Secondary Header is updated.
Feb 9 18:38:49.464761 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 18:38:50.478447 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 18:38:50.478496 disk-uuid[563]: The operation has completed successfully.
Feb 9 18:38:50.500593 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 18:38:50.501572 systemd[1]: Finished disk-uuid.service.
Feb 9 18:38:50.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:50.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:50.503052 systemd[1]: Starting verity-setup.service...
Feb 9 18:38:50.518694 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 9 18:38:50.540358 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 18:38:50.542482 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 18:38:50.544452 systemd[1]: Finished verity-setup.service.
Feb 9 18:38:50.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:50.594434 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 18:38:50.594590 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 18:38:50.595348 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 18:38:50.596057 systemd[1]: Starting ignition-setup.service...
Feb 9 18:38:50.597776 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 18:38:50.604604 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 18:38:50.604637 kernel: BTRFS info (device vda6): using free space tree
Feb 9 18:38:50.604648 kernel: BTRFS info (device vda6): has skinny extents
Feb 9 18:38:50.612522 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 18:38:50.619609 systemd[1]: Finished ignition-setup.service.
Feb 9 18:38:50.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:50.621709 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 18:38:50.674925 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 18:38:50.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:50.676000 audit: BPF prog-id=9 op=LOAD
Feb 9 18:38:50.677070 systemd[1]: Starting systemd-networkd.service...
Feb 9 18:38:50.701305 systemd-networkd[738]: lo: Link UP
Feb 9 18:38:50.701990 systemd-networkd[738]: lo: Gained carrier
Feb 9 18:38:50.702939 systemd-networkd[738]: Enumeration completed
Feb 9 18:38:50.703697 systemd[1]: Started systemd-networkd.service.
Feb 9 18:38:50.704477 systemd-networkd[738]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 18:38:50.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:50.705144 systemd[1]: Reached target network.target.
Feb 9 18:38:50.704565 ignition[656]: Ignition 2.14.0
Feb 9 18:38:50.704572 ignition[656]: Stage: fetch-offline
Feb 9 18:38:50.707371 systemd[1]: Starting iscsiuio.service...
Feb 9 18:38:50.704610 ignition[656]: no configs at "/usr/lib/ignition/base.d"
Feb 9 18:38:50.704618 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 18:38:50.704742 ignition[656]: parsed url from cmdline: ""
Feb 9 18:38:50.704746 ignition[656]: no config URL provided
Feb 9 18:38:50.704750 ignition[656]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 18:38:50.704758 ignition[656]: no config at "/usr/lib/ignition/user.ign"
Feb 9 18:38:50.704773 ignition[656]: op(1): [started] loading QEMU firmware config module
Feb 9 18:38:50.704778 ignition[656]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 9 18:38:50.713620 ignition[656]: op(1): [finished] loading QEMU firmware config module
Feb 9 18:38:50.716666 systemd-networkd[738]: eth0: Link UP
Feb 9 18:38:50.716674 systemd-networkd[738]: eth0: Gained carrier
Feb 9 18:38:50.718292 systemd[1]: Started iscsiuio.service.
Feb 9 18:38:50.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:50.719889 systemd[1]: Starting iscsid.service...
Feb 9 18:38:50.723215 iscsid[745]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 18:38:50.723215 iscsid[745]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Feb 9 18:38:50.723215 iscsid[745]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 18:38:50.723215 iscsid[745]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 18:38:50.723215 iscsid[745]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 18:38:50.723215 iscsid[745]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 18:38:50.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:50.726033 systemd[1]: Started iscsid.service.
Feb 9 18:38:50.731440 systemd[1]: Starting dracut-initqueue.service...
Feb 9 18:38:50.736523 systemd-networkd[738]: eth0: DHCPv4 address 10.0.0.109/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 9 18:38:50.743027 systemd[1]: Finished dracut-initqueue.service.
Feb 9 18:38:50.743893 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 18:38:50.745013 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 18:38:50.746254 systemd[1]: Reached target remote-fs.target.
Feb 9 18:38:50.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:50.748260 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 18:38:50.753334 ignition[656]: parsing config with SHA512: 469fcf0ec4fc3e4a84519a4688d81621da1bb759f9da4a5f7c150ccd302125c690a6b6d137c029a2c25c80c70bb4a31de4bfcef7c6cd22e33dc39b0f41e6e0ae
Feb 9 18:38:50.756207 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 18:38:50.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:50.780021 unknown[656]: fetched base config from "system"
Feb 9 18:38:50.780032 unknown[656]: fetched user config from "qemu"
Feb 9 18:38:50.780435 ignition[656]: fetch-offline: fetch-offline passed
Feb 9 18:38:50.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:50.781644 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 18:38:50.780492 ignition[656]: Ignition finished successfully
Feb 9 18:38:50.782821 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 9 18:38:50.783647 systemd[1]: Starting ignition-kargs.service...
Feb 9 18:38:50.792376 ignition[760]: Ignition 2.14.0
Feb 9 18:38:50.792387 ignition[760]: Stage: kargs
Feb 9 18:38:50.792503 ignition[760]: no configs at "/usr/lib/ignition/base.d"
Feb 9 18:38:50.792514 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 18:38:50.793499 ignition[760]: kargs: kargs passed
Feb 9 18:38:50.793544 ignition[760]: Ignition finished successfully
Feb 9 18:38:50.796832 systemd[1]: Finished ignition-kargs.service.
Feb 9 18:38:50.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:50.798299 systemd[1]: Starting ignition-disks.service...
Feb 9 18:38:50.805113 ignition[766]: Ignition 2.14.0
Feb 9 18:38:50.805123 ignition[766]: Stage: disks
Feb 9 18:38:50.805210 ignition[766]: no configs at "/usr/lib/ignition/base.d"
Feb 9 18:38:50.805220 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 18:38:50.806093 ignition[766]: disks: disks passed
Feb 9 18:38:50.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:50.807590 systemd[1]: Finished ignition-disks.service.
Feb 9 18:38:50.806139 ignition[766]: Ignition finished successfully
Feb 9 18:38:50.808921 systemd[1]: Reached target initrd-root-device.target.
Feb 9 18:38:50.809789 systemd[1]: Reached target local-fs-pre.target.
Feb 9 18:38:50.810797 systemd[1]: Reached target local-fs.target.
Feb 9 18:38:50.811770 systemd[1]: Reached target sysinit.target.
Feb 9 18:38:50.812779 systemd[1]: Reached target basic.target.
Feb 9 18:38:50.814690 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 18:38:50.833027 systemd-fsck[774]: ROOT: clean, 602/553520 files, 56013/553472 blocks
Feb 9 18:38:50.838007 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 18:38:50.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:50.840582 systemd[1]: Mounting sysroot.mount...
Feb 9 18:38:50.847445 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 18:38:50.847648 systemd[1]: Mounted sysroot.mount.
Feb 9 18:38:50.848330 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 18:38:50.850877 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 18:38:50.851585 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 9 18:38:50.851622 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 18:38:50.851645 systemd[1]: Reached target ignition-diskful.target.
Feb 9 18:38:50.853476 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 18:38:50.854781 systemd[1]: Starting initrd-setup-root.service...
Feb 9 18:38:50.858958 initrd-setup-root[784]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 18:38:50.862494 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory
Feb 9 18:38:50.865920 initrd-setup-root[800]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 18:38:50.869023 initrd-setup-root[808]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 18:38:50.894485 systemd[1]: Finished initrd-setup-root.service.
Feb 9 18:38:50.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:50.895958 systemd[1]: Starting ignition-mount.service...
Feb 9 18:38:50.897178 systemd[1]: Starting sysroot-boot.service...
Feb 9 18:38:50.901843 bash[825]: umount: /sysroot/usr/share/oem: not mounted.
Feb 9 18:38:50.910131 ignition[827]: INFO : Ignition 2.14.0
Feb 9 18:38:50.910131 ignition[827]: INFO : Stage: mount
Feb 9 18:38:50.911340 ignition[827]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 18:38:50.911340 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 18:38:50.911340 ignition[827]: INFO : mount: mount passed
Feb 9 18:38:50.911340 ignition[827]: INFO : Ignition finished successfully
Feb 9 18:38:50.914275 systemd[1]: Finished sysroot-boot.service.
Feb 9 18:38:50.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:50.915159 systemd[1]: Finished ignition-mount.service.
Feb 9 18:38:50.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:51.551872 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 18:38:51.559932 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (836)
Feb 9 18:38:51.559968 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 18:38:51.559979 kernel: BTRFS info (device vda6): using free space tree
Feb 9 18:38:51.560837 kernel: BTRFS info (device vda6): has skinny extents
Feb 9 18:38:51.564500 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 18:38:51.565919 systemd[1]: Starting ignition-files.service...
Feb 9 18:38:51.580683 ignition[856]: INFO : Ignition 2.14.0
Feb 9 18:38:51.580683 ignition[856]: INFO : Stage: files
Feb 9 18:38:51.582174 ignition[856]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 18:38:51.582174 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 18:38:51.582174 ignition[856]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 18:38:51.585174 ignition[856]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 18:38:51.585174 ignition[856]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 18:38:51.587708 ignition[856]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 18:38:51.588920 ignition[856]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 18:38:51.588920 ignition[856]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 18:38:51.588920 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 9 18:38:51.588376 unknown[856]: wrote ssh authorized keys file for user: core
Feb 9 18:38:51.594103 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1
Feb 9 18:38:51.912470 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 9 18:38:52.170149 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742
Feb 9 18:38:52.170149 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 9 18:38:52.174465 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 9 18:38:52.174465 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1
Feb 9 18:38:52.401857 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 18:38:52.545191 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c
Feb 9 18:38:52.547359 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 9 18:38:52.548679 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 18:38:52.548679 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1
Feb 9 18:38:52.600433 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 18:38:52.623885 systemd-networkd[738]: eth0: Gained IPv6LL
Feb 9 18:38:52.862019 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db
Feb 9 18:38:52.862019 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 18:38:52.865643 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 18:38:52.865643 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1
Feb 9 18:38:52.886175 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 9 18:38:53.571235 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d
Feb 9 18:38:53.573394 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 18:38:53.573394 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 18:38:53.573394 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 18:38:53.573394 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 18:38:53.573394 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 18:38:53.573394 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 18:38:53.580934 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 18:38:53.580934 ignition[856]: INFO : files: op(a): [started] processing unit "prepare-cni-plugins.service"
Feb 9 18:38:53.580934 ignition[856]: INFO : files: op(a): op(b): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 18:38:53.580934 ignition[856]: INFO : files: op(a): op(b): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 18:38:53.580934 ignition[856]: INFO : files: op(a): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 18:38:53.580934 ignition[856]: INFO : files: op(c): [started] processing unit "prepare-critools.service"
Feb 9 18:38:53.580934 ignition[856]: INFO : files: op(c): op(d): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 18:38:53.580934 ignition[856]: INFO : files: op(c): op(d): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 18:38:53.580934 ignition[856]: INFO : files: op(c): [finished] processing unit "prepare-critools.service"
Feb 9 18:38:53.580934 ignition[856]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Feb 9 18:38:53.580934 ignition[856]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 9 18:38:53.580934 ignition[856]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 9 18:38:53.580934 ignition[856]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Feb 9 18:38:53.580934 ignition[856]: INFO : files: op(10): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 18:38:53.580934 ignition[856]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 18:38:53.580934 ignition[856]: INFO : files: op(11): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 18:38:53.580934 ignition[856]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 18:38:53.580934 ignition[856]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Feb 9 18:38:53.580934 ignition[856]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 9 18:38:53.623160 ignition[856]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 9 18:38:53.623160 ignition[856]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 9 18:38:53.623160 ignition[856]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 18:38:53.623160 ignition[856]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 18:38:53.623160 ignition[856]: INFO : files: files passed
Feb 9 18:38:53.623160 ignition[856]: INFO : Ignition finished successfully
Feb 9 18:38:53.636872 kernel: kauditd_printk_skb: 23 callbacks suppressed
Feb 9 18:38:53.636893 kernel: audit: type=1130 audit(1707503933.624:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.636912 kernel: audit: type=1130 audit(1707503933.634:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.624275 systemd[1]: Finished ignition-files.service.
Feb 9 18:38:53.641350 kernel: audit: type=1130 audit(1707503933.637:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.641367 kernel: audit: type=1131 audit(1707503933.637:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.626215 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 9 18:38:53.629918 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 9 18:38:53.644649 initrd-setup-root-after-ignition[879]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Feb 9 18:38:53.630595 systemd[1]: Starting ignition-quench.service...
Feb 9 18:38:53.646336 initrd-setup-root-after-ignition[882]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 9 18:38:53.633317 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 9 18:38:53.634738 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 9 18:38:53.634813 systemd[1]: Finished ignition-quench.service.
Feb 9 18:38:53.637714 systemd[1]: Reached target ignition-complete.target.
Feb 9 18:38:53.642757 systemd[1]: Starting initrd-parse-etc.service...
Feb 9 18:38:53.654834 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 9 18:38:53.654923 systemd[1]: Finished initrd-parse-etc.service.
Feb 9 18:38:53.659964 kernel: audit: type=1130 audit(1707503933.655:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.659982 kernel: audit: type=1131 audit(1707503933.655:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.656170 systemd[1]: Reached target initrd-fs.target.
Feb 9 18:38:53.660585 systemd[1]: Reached target initrd.target.
Feb 9 18:38:53.661667 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 9 18:38:53.662388 systemd[1]: Starting dracut-pre-pivot.service...
Feb 9 18:38:53.672901 systemd[1]: Finished dracut-pre-pivot.service.
Feb 9 18:38:53.676496 kernel: audit: type=1130 audit(1707503933.673:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.674350 systemd[1]: Starting initrd-cleanup.service...
Feb 9 18:38:53.682151 systemd[1]: Stopped target nss-lookup.target.
Feb 9 18:38:53.682819 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 9 18:38:53.683837 systemd[1]: Stopped target timers.target.
Feb 9 18:38:53.684847 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 9 18:38:53.684950 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 9 18:38:53.688886 kernel: audit: type=1131 audit(1707503933.685:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.685903 systemd[1]: Stopped target initrd.target.
Feb 9 18:38:53.688530 systemd[1]: Stopped target basic.target.
Feb 9 18:38:53.689406 systemd[1]: Stopped target ignition-complete.target.
Feb 9 18:38:53.690383 systemd[1]: Stopped target ignition-diskful.target.
Feb 9 18:38:53.691431 systemd[1]: Stopped target initrd-root-device.target.
Feb 9 18:38:53.692481 systemd[1]: Stopped target remote-fs.target. Feb 9 18:38:53.693485 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 18:38:53.694546 systemd[1]: Stopped target sysinit.target. Feb 9 18:38:53.695446 systemd[1]: Stopped target local-fs.target. Feb 9 18:38:53.696425 systemd[1]: Stopped target local-fs-pre.target. Feb 9 18:38:53.697381 systemd[1]: Stopped target swap.target. Feb 9 18:38:53.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:53.701433 kernel: audit: type=1131 audit(1707503933.698:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:53.698235 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 18:38:53.698335 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 18:38:53.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:53.699301 systemd[1]: Stopped target cryptsetup.target. Feb 9 18:38:53.706245 kernel: audit: type=1131 audit(1707503933.702:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:53.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:53.702018 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 18:38:53.702118 systemd[1]: Stopped dracut-initqueue.service. 
Feb 9 18:38:53.703131 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 18:38:53.703220 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 9 18:38:53.705946 systemd[1]: Stopped target paths.target.
Feb 9 18:38:53.706760 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 18:38:53.710479 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 9 18:38:53.711187 systemd[1]: Stopped target slices.target.
Feb 9 18:38:53.712187 systemd[1]: Stopped target sockets.target.
Feb 9 18:38:53.713123 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 9 18:38:53.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.713228 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 9 18:38:53.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.714172 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 18:38:53.714259 systemd[1]: Stopped ignition-files.service.
Feb 9 18:38:53.717538 iscsid[745]: iscsid shutting down.
Feb 9 18:38:53.716093 systemd[1]: Stopping ignition-mount.service...
Feb 9 18:38:53.718552 systemd[1]: Stopping iscsid.service...
Feb 9 18:38:53.719052 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 18:38:53.719169 systemd[1]: Stopped kmod-static-nodes.service.
Feb 9 18:38:53.720820 systemd[1]: Stopping sysroot-boot.service...
Feb 9 18:38:53.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.721606 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 18:38:53.721740 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 9 18:38:53.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.724358 ignition[896]: INFO : Ignition 2.14.0
Feb 9 18:38:53.724358 ignition[896]: INFO : Stage: umount
Feb 9 18:38:53.724358 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 18:38:53.724358 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 18:38:53.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.722708 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 9 18:38:53.729262 ignition[896]: INFO : umount: umount passed
Feb 9 18:38:53.729262 ignition[896]: INFO : Ignition finished successfully
Feb 9 18:38:53.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.722797 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 9 18:38:53.725279 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 18:38:53.725442 systemd[1]: Stopped iscsid.service.
Feb 9 18:38:53.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:53.726462 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 18:38:53.726531 systemd[1]: Closed iscsid.socket. Feb 9 18:38:53.727921 systemd[1]: Stopping iscsiuio.service... Feb 9 18:38:53.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:53.728786 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 18:38:53.728919 systemd[1]: Finished initrd-cleanup.service. Feb 9 18:38:53.731735 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 18:38:53.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:53.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:53.732118 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 18:38:53.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:53.732207 systemd[1]: Stopped iscsiuio.service. Feb 9 18:38:53.734220 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 18:38:53.734356 systemd[1]: Stopped ignition-mount.service. Feb 9 18:38:53.736594 systemd[1]: Stopped target network.target. Feb 9 18:38:53.737914 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 18:38:53.737951 systemd[1]: Closed iscsiuio.socket. 
Feb 9 18:38:53.738522 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 18:38:53.738565 systemd[1]: Stopped ignition-disks.service.
Feb 9 18:38:53.740359 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 18:38:53.740443 systemd[1]: Stopped ignition-kargs.service.
Feb 9 18:38:53.741007 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 9 18:38:53.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.741040 systemd[1]: Stopped ignition-setup.service.
Feb 9 18:38:53.742437 systemd[1]: Stopping systemd-networkd.service...
Feb 9 18:38:53.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.743103 systemd[1]: Stopping systemd-resolved.service...
Feb 9 18:38:53.752470 systemd-networkd[738]: eth0: DHCPv6 lease lost
Feb 9 18:38:53.752559 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 9 18:38:53.760000 audit: BPF prog-id=6 op=UNLOAD
Feb 9 18:38:53.752664 systemd[1]: Stopped systemd-resolved.service.
Feb 9 18:38:53.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.754654 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 18:38:53.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.765000 audit: BPF prog-id=9 op=UNLOAD
Feb 9 18:38:53.754750 systemd[1]: Stopped systemd-networkd.service.
Feb 9 18:38:53.756705 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 18:38:53.756738 systemd[1]: Closed systemd-networkd.socket.
Feb 9 18:38:53.758381 systemd[1]: Stopping network-cleanup.service...
Feb 9 18:38:53.760053 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 18:38:53.760115 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 9 18:38:53.761877 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 18:38:53.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.761922 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 18:38:53.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.763140 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 18:38:53.763177 systemd[1]: Stopped systemd-modules-load.service.
Feb 9 18:38:53.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.764263 systemd[1]: Stopping systemd-udevd.service...
Feb 9 18:38:53.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.768912 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 9 18:38:53.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.772208 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 18:38:53.772314 systemd[1]: Stopped network-cleanup.service.
Feb 9 18:38:53.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.774613 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 18:38:53.774735 systemd[1]: Stopped systemd-udevd.service.
Feb 9 18:38:53.775978 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 9 18:38:53.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:53.776015 systemd[1]: Closed systemd-udevd-control.socket.
Feb 9 18:38:53.777331 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 18:38:53.777370 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 9 18:38:53.778743 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 18:38:53.778787 systemd[1]: Stopped dracut-pre-udev.service.
Feb 9 18:38:53.779508 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 18:38:53.779549 systemd[1]: Stopped dracut-cmdline.service.
Feb 9 18:38:53.780708 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 18:38:53.780746 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 18:38:53.782754 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 18:38:53.784154 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 18:38:53.784214 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 18:38:53.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:53.788207 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 18:38:53.788292 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 18:38:53.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:53.801076 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 18:38:53.801168 systemd[1]: Stopped sysroot-boot.service. Feb 9 18:38:53.802391 systemd[1]: Reached target initrd-switch-root.target. Feb 9 18:38:53.803636 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 18:38:53.803687 systemd[1]: Stopped initrd-setup-root.service. Feb 9 18:38:53.805845 systemd[1]: Starting initrd-switch-root.service... Feb 9 18:38:53.812258 systemd[1]: Switching root. Feb 9 18:38:53.830625 systemd-journald[290]: Journal stopped Feb 9 18:38:55.901577 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). Feb 9 18:38:55.901632 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 18:38:55.901645 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 9 18:38:55.901655 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 18:38:55.901671 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 18:38:55.901681 kernel: SELinux: policy capability open_perms=1 Feb 9 18:38:55.901691 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 18:38:55.901701 kernel: SELinux: policy capability always_check_network=0 Feb 9 18:38:55.901711 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 18:38:55.901720 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 18:38:55.901729 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 18:38:55.901739 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 18:38:55.901749 systemd[1]: Successfully loaded SELinux policy in 32.571ms. Feb 9 18:38:55.901771 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.112ms. Feb 9 18:38:55.901783 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 18:38:55.901794 systemd[1]: Detected virtualization kvm. Feb 9 18:38:55.901805 systemd[1]: Detected architecture arm64. Feb 9 18:38:55.901815 systemd[1]: Detected first boot. Feb 9 18:38:55.901825 systemd[1]: Initializing machine ID from VM UUID. Feb 9 18:38:55.901836 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 18:38:55.901853 systemd[1]: Populated /etc with preset unit settings. Feb 9 18:38:55.901865 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 9 18:38:55.901877 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:38:55.901889 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:38:55.901900 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 18:38:55.901912 systemd[1]: Stopped initrd-switch-root.service. Feb 9 18:38:55.901923 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 18:38:55.901934 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 18:38:55.901945 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 18:38:55.901956 systemd[1]: Created slice system-getty.slice. Feb 9 18:38:55.901966 systemd[1]: Created slice system-modprobe.slice. Feb 9 18:38:55.901976 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 18:38:55.901987 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 18:38:55.901997 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 18:38:55.902009 systemd[1]: Created slice user.slice. Feb 9 18:38:55.902020 systemd[1]: Started systemd-ask-password-console.path. Feb 9 18:38:55.902031 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 18:38:55.902041 systemd[1]: Set up automount boot.automount. Feb 9 18:38:55.902052 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 18:38:55.902062 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 18:38:55.902073 systemd[1]: Stopped target initrd-fs.target. Feb 9 18:38:55.902083 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 18:38:55.902094 systemd[1]: Reached target integritysetup.target. Feb 9 18:38:55.902104 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 18:38:55.902116 systemd[1]: Reached target remote-fs.target. 
Feb 9 18:38:55.902127 systemd[1]: Reached target slices.target. Feb 9 18:38:55.902138 systemd[1]: Reached target swap.target. Feb 9 18:38:55.902149 systemd[1]: Reached target torcx.target. Feb 9 18:38:55.902159 systemd[1]: Reached target veritysetup.target. Feb 9 18:38:55.902170 systemd[1]: Listening on systemd-coredump.socket. Feb 9 18:38:55.902180 systemd[1]: Listening on systemd-initctl.socket. Feb 9 18:38:55.902190 systemd[1]: Listening on systemd-networkd.socket. Feb 9 18:38:55.902201 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 18:38:55.902211 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 18:38:55.902224 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 18:38:55.902235 systemd[1]: Mounting dev-hugepages.mount... Feb 9 18:38:55.902245 systemd[1]: Mounting dev-mqueue.mount... Feb 9 18:38:55.902255 systemd[1]: Mounting media.mount... Feb 9 18:38:55.902265 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 18:38:55.902275 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 18:38:55.902286 systemd[1]: Mounting tmp.mount... Feb 9 18:38:55.902296 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 18:38:55.902307 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 18:38:55.902318 systemd[1]: Starting kmod-static-nodes.service... Feb 9 18:38:55.902355 systemd[1]: Starting modprobe@configfs.service... Feb 9 18:38:55.902367 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 18:38:55.902378 systemd[1]: Starting modprobe@drm.service... Feb 9 18:38:55.902389 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 18:38:55.902399 systemd[1]: Starting modprobe@fuse.service... Feb 9 18:38:55.902409 systemd[1]: Starting modprobe@loop.service... Feb 9 18:38:55.902427 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 18:38:55.902453 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Feb 9 18:38:55.902466 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 18:38:55.902477 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 18:38:55.902488 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 18:38:55.902498 kernel: loop: module loaded Feb 9 18:38:55.902508 systemd[1]: Stopped systemd-journald.service. Feb 9 18:38:55.902518 kernel: fuse: init (API version 7.34) Feb 9 18:38:55.902530 systemd[1]: Starting systemd-journald.service... Feb 9 18:38:55.902542 systemd[1]: Starting systemd-modules-load.service... Feb 9 18:38:55.902552 systemd[1]: Starting systemd-network-generator.service... Feb 9 18:38:55.902563 systemd[1]: Starting systemd-remount-fs.service... Feb 9 18:38:55.902573 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 18:38:55.902585 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 18:38:55.902595 systemd[1]: Stopped verity-setup.service. Feb 9 18:38:55.902606 systemd[1]: Mounted dev-hugepages.mount. Feb 9 18:38:55.902616 systemd[1]: Mounted dev-mqueue.mount. Feb 9 18:38:55.902626 systemd[1]: Mounted media.mount. Feb 9 18:38:55.902637 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 18:38:55.902648 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 18:38:55.902659 systemd[1]: Mounted tmp.mount. Feb 9 18:38:55.902669 systemd[1]: Finished kmod-static-nodes.service. Feb 9 18:38:55.902679 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 18:38:55.902693 systemd-journald[999]: Journal started Feb 9 18:38:55.902736 systemd-journald[999]: Runtime Journal (/run/log/journal/37e7e4711f0c4459a4e5498c4c428200) is 6.0M, max 48.7M, 42.6M free. 
Feb 9 18:38:53.892000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 9 18:38:54.047000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 18:38:54.047000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 18:38:54.047000 audit: BPF prog-id=10 op=LOAD
Feb 9 18:38:54.047000 audit: BPF prog-id=10 op=UNLOAD
Feb 9 18:38:54.047000 audit: BPF prog-id=11 op=LOAD
Feb 9 18:38:54.047000 audit: BPF prog-id=11 op=UNLOAD
Feb 9 18:38:54.086000 audit[929]: AVC avc: denied { associate } for pid=929 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 9 18:38:54.086000 audit[929]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c58d4 a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=912 pid=929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:38:54.086000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 18:38:54.087000 audit[929]: AVC avc: denied { associate } for pid=929 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 9 18:38:54.087000 audit[929]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c59b9 a2=1ed a3=0 items=2 ppid=912 pid=929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:38:54.087000 audit: CWD cwd="/"
Feb 9 18:38:54.087000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:38:54.087000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:38:54.087000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 18:38:55.776000 audit: BPF prog-id=12 op=LOAD
Feb 9 18:38:55.776000 audit: BPF prog-id=3 op=UNLOAD
Feb 9 18:38:55.777000 audit: BPF prog-id=13 op=LOAD
Feb 9 18:38:55.777000 audit: BPF prog-id=14 op=LOAD
Feb 9 18:38:55.777000 audit: BPF prog-id=4 op=UNLOAD
Feb 9 18:38:55.777000 audit: BPF prog-id=5 op=UNLOAD
Feb 9 18:38:55.777000 audit: BPF prog-id=15 op=LOAD
Feb 9 18:38:55.777000 audit: BPF prog-id=12 op=UNLOAD
Feb 9 18:38:55.777000 audit: BPF prog-id=16 op=LOAD
Feb 9 18:38:55.777000 audit: BPF prog-id=17 op=LOAD
Feb 9 18:38:55.777000 audit: BPF prog-id=13 op=UNLOAD
Feb 9 18:38:55.777000 audit: BPF prog-id=14 op=UNLOAD
Feb 9 18:38:55.778000 audit: BPF prog-id=18 op=LOAD
Feb 9 18:38:55.778000 audit: BPF prog-id=15 op=UNLOAD
Feb 9 18:38:55.778000 audit: BPF prog-id=19 op=LOAD
Feb 9 18:38:55.778000 audit: BPF prog-id=20 op=LOAD
Feb 9 18:38:55.778000 audit: BPF prog-id=16 op=UNLOAD
Feb 9 18:38:55.778000 audit: BPF prog-id=17 op=UNLOAD
Feb 9 18:38:55.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:55.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:55.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:55.790000 audit: BPF prog-id=18 op=UNLOAD
Feb 9 18:38:55.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:55.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:55.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:55.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:55.873000 audit: BPF prog-id=21 op=LOAD
Feb 9 18:38:55.875000 audit: BPF prog-id=22 op=LOAD
Feb 9 18:38:55.875000 audit: BPF prog-id=23 op=LOAD
Feb 9 18:38:55.875000 audit: BPF prog-id=19 op=UNLOAD
Feb 9 18:38:55.875000 audit: BPF prog-id=20 op=UNLOAD
Feb 9 18:38:55.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:55.899000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 9 18:38:55.899000 audit[999]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffe3511d70 a2=4000 a3=1 items=0 ppid=1 pid=999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:38:55.899000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 9 18:38:55.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:54.085151 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-09T18:38:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 18:38:55.903865 systemd[1]: Finished modprobe@configfs.service.
Feb 9 18:38:55.775953 systemd[1]: Queued start job for default target multi-user.target.
Feb 9 18:38:54.085396 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-09T18:38:54Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 18:38:55.775964 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Feb 9 18:38:54.085432 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-09T18:38:54Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 18:38:55.779408 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 9 18:38:54.085463 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-09T18:38:54Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 9 18:38:54.085473 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-09T18:38:54Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 9 18:38:54.085498 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-09T18:38:54Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 9 18:38:55.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:55.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:38:54.085510 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-09T18:38:54Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 9 18:38:54.085697 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-09T18:38:54Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 9 18:38:54.085730 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-09T18:38:54Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 18:38:54.085742 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-09T18:38:54Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 18:38:54.086132 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-09T18:38:54Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 9 18:38:54.086165 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-09T18:38:54Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 9 18:38:54.086183 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-09T18:38:54Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 9 18:38:54.086197 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-09T18:38:54Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 9 18:38:54.086214 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-09T18:38:54Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 9 18:38:54.086227 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-09T18:38:54Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 9 18:38:55.521808 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-09T18:38:55Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 18:38:55.522067 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-09T18:38:55Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 18:38:55.522167 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-09T18:38:55Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 18:38:55.522326 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-09T18:38:55Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 18:38:55.522391 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-09T18:38:55Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 9 18:38:55.522473 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2024-02-09T18:38:55Z" level=debug msg="system state sealed"
content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 18:38:55.905825 systemd[1]: Started systemd-journald.service. Feb 9 18:38:55.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:55.906518 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 18:38:55.906677 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 18:38:55.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:55.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:55.907701 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 18:38:55.907864 systemd[1]: Finished modprobe@drm.service. Feb 9 18:38:55.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:55.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:55.908823 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 18:38:55.908969 systemd[1]: Finished modprobe@efi_pstore.service. 
Feb 9 18:38:55.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:55.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:55.910134 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 18:38:55.910276 systemd[1]: Finished modprobe@fuse.service. Feb 9 18:38:55.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:55.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:55.911330 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 18:38:55.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:55.912321 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 18:38:55.912515 systemd[1]: Finished modprobe@loop.service. Feb 9 18:38:55.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:55.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 9 18:38:55.913678 systemd[1]: Finished systemd-modules-load.service. Feb 9 18:38:55.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:55.914742 systemd[1]: Finished systemd-network-generator.service. Feb 9 18:38:55.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:55.915834 systemd[1]: Finished systemd-remount-fs.service. Feb 9 18:38:55.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:55.917029 systemd[1]: Reached target network-pre.target. Feb 9 18:38:55.918960 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 18:38:55.920923 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 18:38:55.921838 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 18:38:55.923577 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 18:38:55.925598 systemd[1]: Starting systemd-journal-flush.service... Feb 9 18:38:55.926430 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 18:38:55.927589 systemd[1]: Starting systemd-random-seed.service... Feb 9 18:38:55.928393 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 18:38:55.929716 systemd[1]: Starting systemd-sysctl.service... Feb 9 18:38:55.931836 systemd[1]: Starting systemd-sysusers.service... 
Feb 9 18:38:55.935485 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 18:38:55.936478 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 18:38:55.940869 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 18:38:55.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:55.943265 systemd[1]: Starting systemd-udev-settle.service... Feb 9 18:38:55.949382 systemd-journald[999]: Time spent on flushing to /var/log/journal/37e7e4711f0c4459a4e5498c4c428200 is 12.544ms for 1014 entries. Feb 9 18:38:55.949382 systemd-journald[999]: System Journal (/var/log/journal/37e7e4711f0c4459a4e5498c4c428200) is 8.0M, max 195.6M, 187.6M free. Feb 9 18:38:55.970669 systemd-journald[999]: Received client request to flush runtime journal. Feb 9 18:38:55.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:55.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:55.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:55.970870 udevadm[1030]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 18:38:55.950192 systemd[1]: Finished systemd-random-seed.service. Feb 9 18:38:55.951390 systemd[1]: Finished systemd-sysctl.service. 
Feb 9 18:38:55.952506 systemd[1]: Reached target first-boot-complete.target. Feb 9 18:38:55.964464 systemd[1]: Finished systemd-sysusers.service. Feb 9 18:38:55.972246 systemd[1]: Finished systemd-journal-flush.service. Feb 9 18:38:55.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:56.289052 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 18:38:56.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:56.290000 audit: BPF prog-id=24 op=LOAD Feb 9 18:38:56.290000 audit: BPF prog-id=25 op=LOAD Feb 9 18:38:56.290000 audit: BPF prog-id=7 op=UNLOAD Feb 9 18:38:56.290000 audit: BPF prog-id=8 op=UNLOAD Feb 9 18:38:56.291279 systemd[1]: Starting systemd-udevd.service... Feb 9 18:38:56.310212 systemd-udevd[1032]: Using default interface naming scheme 'v252'. Feb 9 18:38:56.321142 systemd[1]: Started systemd-udevd.service. Feb 9 18:38:56.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:56.322000 audit: BPF prog-id=26 op=LOAD Feb 9 18:38:56.323650 systemd[1]: Starting systemd-networkd.service... Feb 9 18:38:56.329000 audit: BPF prog-id=27 op=LOAD Feb 9 18:38:56.329000 audit: BPF prog-id=28 op=LOAD Feb 9 18:38:56.329000 audit: BPF prog-id=29 op=LOAD Feb 9 18:38:56.330383 systemd[1]: Starting systemd-userdbd.service... Feb 9 18:38:56.351154 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Feb 9 18:38:56.375804 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Feb 9 18:38:56.377764 systemd[1]: Started systemd-userdbd.service. Feb 9 18:38:56.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:56.422192 systemd-networkd[1040]: lo: Link UP Feb 9 18:38:56.422205 systemd-networkd[1040]: lo: Gained carrier Feb 9 18:38:56.422722 systemd-networkd[1040]: Enumeration completed Feb 9 18:38:56.422815 systemd[1]: Started systemd-networkd.service. Feb 9 18:38:56.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:56.423611 systemd-networkd[1040]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 18:38:56.424705 systemd-networkd[1040]: eth0: Link UP Feb 9 18:38:56.424717 systemd-networkd[1040]: eth0: Gained carrier Feb 9 18:38:56.432857 systemd[1]: Finished systemd-udev-settle.service. Feb 9 18:38:56.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:56.434992 systemd[1]: Starting lvm2-activation-early.service... Feb 9 18:38:56.446849 lvm[1065]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 18:38:56.457606 systemd-networkd[1040]: eth0: DHCPv4 address 10.0.0.109/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 18:38:56.480234 systemd[1]: Finished lvm2-activation-early.service. Feb 9 18:38:56.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:38:56.481053 systemd[1]: Reached target cryptsetup.target. Feb 9 18:38:56.482962 systemd[1]: Starting lvm2-activation.service... Feb 9 18:38:56.486397 lvm[1066]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 18:38:56.519297 systemd[1]: Finished lvm2-activation.service. Feb 9 18:38:56.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:56.520184 systemd[1]: Reached target local-fs-pre.target. Feb 9 18:38:56.521000 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 18:38:56.521030 systemd[1]: Reached target local-fs.target. Feb 9 18:38:56.521791 systemd[1]: Reached target machines.target. Feb 9 18:38:56.523678 systemd[1]: Starting ldconfig.service... Feb 9 18:38:56.524702 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 18:38:56.524795 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:38:56.526174 systemd[1]: Starting systemd-boot-update.service... Feb 9 18:38:56.528668 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 18:38:56.531015 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 18:38:56.532118 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 18:38:56.532159 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 18:38:56.533386 systemd[1]: Starting systemd-tmpfiles-setup.service... 
Feb 9 18:38:56.536252 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1068 (bootctl) Feb 9 18:38:56.537239 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 18:38:56.540976 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 18:38:56.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:56.546158 systemd-tmpfiles[1071]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 18:38:56.548107 systemd-tmpfiles[1071]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 18:38:56.553513 systemd-tmpfiles[1071]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 18:38:56.622788 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 18:38:56.623610 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 18:38:56.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:56.642515 systemd-fsck[1077]: fsck.fat 4.2 (2021-01-31) Feb 9 18:38:56.642515 systemd-fsck[1077]: /dev/vda1: 236 files, 113719/258078 clusters Feb 9 18:38:56.644696 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 18:38:56.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:56.647725 systemd[1]: Mounting boot.mount... 
Feb 9 18:38:56.654155 systemd[1]: Mounted boot.mount. Feb 9 18:38:56.662516 systemd[1]: Finished systemd-boot-update.service. Feb 9 18:38:56.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:56.719682 ldconfig[1067]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 18:38:56.723807 systemd[1]: Finished ldconfig.service. Feb 9 18:38:56.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:56.738912 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 18:38:56.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:56.741042 systemd[1]: Starting audit-rules.service... Feb 9 18:38:56.742787 systemd[1]: Starting clean-ca-certificates.service... Feb 9 18:38:56.744876 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 18:38:56.746000 audit: BPF prog-id=30 op=LOAD Feb 9 18:38:56.747168 systemd[1]: Starting systemd-resolved.service... Feb 9 18:38:56.748000 audit: BPF prog-id=31 op=LOAD Feb 9 18:38:56.750516 systemd[1]: Starting systemd-timesyncd.service... Feb 9 18:38:56.752441 systemd[1]: Starting systemd-update-utmp.service... Feb 9 18:38:56.753797 systemd[1]: Finished clean-ca-certificates.service. Feb 9 18:38:56.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:38:56.754902 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 18:38:56.756000 audit[1091]: SYSTEM_BOOT pid=1091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 18:38:56.759507 systemd[1]: Finished systemd-update-utmp.service. Feb 9 18:38:56.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:56.761651 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 18:38:56.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:38:56.763631 systemd[1]: Starting systemd-update-done.service... Feb 9 18:38:56.769153 systemd[1]: Finished systemd-update-done.service. Feb 9 18:38:56.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:38:56.789000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 18:38:56.789000 audit[1102]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffde3c2510 a2=420 a3=0 items=0 ppid=1080 pid=1102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:38:56.789000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 18:38:56.790174 augenrules[1102]: No rules Feb 9 18:38:56.791145 systemd[1]: Finished audit-rules.service. Feb 9 18:38:56.800589 systemd[1]: Started systemd-timesyncd.service. Feb 9 18:38:56.801747 systemd-timesyncd[1090]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 9 18:38:56.801810 systemd-timesyncd[1090]: Initial clock synchronization to Fri 2024-02-09 18:38:56.490811 UTC. Feb 9 18:38:56.801834 systemd[1]: Reached target time-set.target. Feb 9 18:38:56.802680 systemd-resolved[1084]: Positive Trust Anchors: Feb 9 18:38:56.802692 systemd-resolved[1084]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 18:38:56.802719 systemd-resolved[1084]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 18:38:56.812278 systemd-resolved[1084]: Defaulting to hostname 'linux'. Feb 9 18:38:56.813733 systemd[1]: Started systemd-resolved.service. Feb 9 18:38:56.814620 systemd[1]: Reached target network.target. 
Feb 9 18:38:56.815436 systemd[1]: Reached target nss-lookup.target. Feb 9 18:38:56.816195 systemd[1]: Reached target sysinit.target. Feb 9 18:38:56.817016 systemd[1]: Started motdgen.path. Feb 9 18:38:56.817702 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 18:38:56.818826 systemd[1]: Started logrotate.timer. Feb 9 18:38:56.819575 systemd[1]: Started mdadm.timer. Feb 9 18:38:56.820200 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 18:38:56.821016 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 18:38:56.821050 systemd[1]: Reached target paths.target. Feb 9 18:38:56.821728 systemd[1]: Reached target timers.target. Feb 9 18:38:56.822806 systemd[1]: Listening on dbus.socket. Feb 9 18:38:56.824537 systemd[1]: Starting docker.socket... Feb 9 18:38:56.827523 systemd[1]: Listening on sshd.socket. Feb 9 18:38:56.828181 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:38:56.828606 systemd[1]: Listening on docker.socket. Feb 9 18:38:56.829437 systemd[1]: Reached target sockets.target. Feb 9 18:38:56.830158 systemd[1]: Reached target basic.target. Feb 9 18:38:56.830896 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 18:38:56.830925 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 18:38:56.831924 systemd[1]: Starting containerd.service... Feb 9 18:38:56.833586 systemd[1]: Starting dbus.service... Feb 9 18:38:56.835065 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 18:38:56.837047 systemd[1]: Starting extend-filesystems.service... 
Feb 9 18:38:56.838520 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 18:38:56.840546 systemd[1]: Starting motdgen.service... Feb 9 18:38:56.842202 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 18:38:56.845897 systemd[1]: Starting prepare-critools.service... Feb 9 18:38:56.847700 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 18:38:56.849736 systemd[1]: Starting sshd-keygen.service... Feb 9 18:38:56.853299 extend-filesystems[1113]: Found vda Feb 9 18:38:56.853299 extend-filesystems[1113]: Found vda1 Feb 9 18:38:56.853299 extend-filesystems[1113]: Found vda2 Feb 9 18:38:56.853299 extend-filesystems[1113]: Found vda3 Feb 9 18:38:56.853299 extend-filesystems[1113]: Found usr Feb 9 18:38:56.853299 extend-filesystems[1113]: Found vda4 Feb 9 18:38:56.853299 extend-filesystems[1113]: Found vda6 Feb 9 18:38:56.853299 extend-filesystems[1113]: Found vda7 Feb 9 18:38:56.853299 extend-filesystems[1113]: Found vda9 Feb 9 18:38:56.853299 extend-filesystems[1113]: Checking size of /dev/vda9 Feb 9 18:38:56.884470 jq[1112]: false Feb 9 18:38:56.854058 systemd[1]: Starting systemd-logind.service... Feb 9 18:38:56.858446 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:38:56.860314 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 18:38:56.860864 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 18:38:56.885356 jq[1137]: true Feb 9 18:38:56.861776 systemd[1]: Starting update-engine.service... Feb 9 18:38:56.870017 systemd[1]: Starting update-ssh-keys-after-ignition.service... 
Feb 9 18:38:56.872597 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 18:38:56.872767 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 18:38:56.874749 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 18:38:56.874918 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 18:38:56.888360 extend-filesystems[1113]: Resized partition /dev/vda9 Feb 9 18:38:56.891506 tar[1139]: ./ Feb 9 18:38:56.891506 tar[1139]: ./macvlan Feb 9 18:38:56.889728 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 18:38:56.889886 systemd[1]: Finished motdgen.service. Feb 9 18:38:56.905145 extend-filesystems[1145]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 18:38:56.910604 tar[1140]: crictl Feb 9 18:38:56.912159 dbus-daemon[1111]: [system] SELinux support is enabled Feb 9 18:38:56.912303 systemd[1]: Started dbus.service. Feb 9 18:38:56.914830 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 18:38:56.914857 systemd[1]: Reached target system-config.target. Feb 9 18:38:56.915705 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 18:38:56.915725 systemd[1]: Reached target user-config.target. Feb 9 18:38:56.919760 jq[1141]: true Feb 9 18:38:56.940470 tar[1139]: ./static Feb 9 18:38:56.942476 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 9 18:38:56.952660 systemd-logind[1126]: Watching system buttons on /dev/input/event0 (Power Button) Feb 9 18:38:56.953234 systemd-logind[1126]: New seat seat0. Feb 9 18:38:56.959375 systemd[1]: Started systemd-logind.service. Feb 9 18:38:56.973400 update_engine[1132]: I0209 18:38:56.971168 1132 main.cc:92] Flatcar Update Engine starting Feb 9 18:38:56.981099 systemd[1]: Started update-engine.service. 
Feb 9 18:38:56.984823 update_engine[1132]: I0209 18:38:56.981088 1132 update_check_scheduler.cc:74] Next update check in 4m58s Feb 9 18:38:56.983875 systemd[1]: Started locksmithd.service. Feb 9 18:38:56.986450 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 9 18:38:57.012706 extend-filesystems[1145]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 9 18:38:57.012706 extend-filesystems[1145]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 18:38:57.012706 extend-filesystems[1145]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 9 18:38:57.016615 tar[1139]: ./vlan Feb 9 18:38:57.007291 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 18:38:57.016714 bash[1165]: Updated "/home/core/.ssh/authorized_keys" Feb 9 18:38:57.016801 extend-filesystems[1113]: Resized filesystem in /dev/vda9 Feb 9 18:38:57.007475 systemd[1]: Finished extend-filesystems.service. Feb 9 18:38:57.014236 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 18:38:57.026309 env[1142]: time="2024-02-09T18:38:57.026253372Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 18:38:57.051445 tar[1139]: ./portmap Feb 9 18:38:57.057511 env[1142]: time="2024-02-09T18:38:57.057389023Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 18:38:57.057603 env[1142]: time="2024-02-09T18:38:57.057549495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:38:57.059142 env[1142]: time="2024-02-09T18:38:57.059096013Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:38:57.059142 env[1142]: time="2024-02-09T18:38:57.059137611Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:38:57.059478 env[1142]: time="2024-02-09T18:38:57.059451635Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:38:57.059478 env[1142]: time="2024-02-09T18:38:57.059476779Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 18:38:57.059531 env[1142]: time="2024-02-09T18:38:57.059490158Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 18:38:57.059531 env[1142]: time="2024-02-09T18:38:57.059500961Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 18:38:57.059600 env[1142]: time="2024-02-09T18:38:57.059581582Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:38:57.060032 env[1142]: time="2024-02-09T18:38:57.060007253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:38:57.060157 env[1142]: time="2024-02-09T18:38:57.060135777Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:38:57.060157 env[1142]: time="2024-02-09T18:38:57.060155846Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 18:38:57.060230 env[1142]: time="2024-02-09T18:38:57.060213706Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 18:38:57.060268 env[1142]: time="2024-02-09T18:38:57.060229431Z" level=info msg="metadata content store policy set" policy=shared Feb 9 18:38:57.063798 env[1142]: time="2024-02-09T18:38:57.063766437Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 18:38:57.063871 env[1142]: time="2024-02-09T18:38:57.063800538Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 18:38:57.063871 env[1142]: time="2024-02-09T18:38:57.063821376Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 18:38:57.063871 env[1142]: time="2024-02-09T18:38:57.063853747Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 18:38:57.063871 env[1142]: time="2024-02-09T18:38:57.063868318Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 18:38:57.063958 env[1142]: time="2024-02-09T18:38:57.063882312Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 18:38:57.063958 env[1142]: time="2024-02-09T18:38:57.063894423Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Feb 9 18:38:57.064381 env[1142]: time="2024-02-09T18:38:57.064297565Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 18:38:57.064421 env[1142]: time="2024-02-09T18:38:57.064389873Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 18:38:57.064421 env[1142]: time="2024-02-09T18:38:57.064406058Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 18:38:57.064468 env[1142]: time="2024-02-09T18:38:57.064426858Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 18:38:57.064468 env[1142]: time="2024-02-09T18:38:57.064440083Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 18:38:57.064564 env[1142]: time="2024-02-09T18:38:57.064543656Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 18:38:57.064643 env[1142]: time="2024-02-09T18:38:57.064627006Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 18:38:57.065434 env[1142]: time="2024-02-09T18:38:57.064975978Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 18:38:57.065434 env[1142]: time="2024-02-09T18:38:57.065017807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 18:38:57.065434 env[1142]: time="2024-02-09T18:38:57.065031493Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 18:38:57.065434 env[1142]: time="2024-02-09T18:38:57.065139026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Feb 9 18:38:57.065434 env[1142]: time="2024-02-09T18:38:57.065153059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 18:38:57.065434 env[1142]: time="2024-02-09T18:38:57.065164054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 18:38:57.065434 env[1142]: time="2024-02-09T18:38:57.065174934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 18:38:57.065434 env[1142]: time="2024-02-09T18:38:57.065186622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 18:38:57.065434 env[1142]: time="2024-02-09T18:38:57.065199001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 18:38:57.065434 env[1142]: time="2024-02-09T18:38:57.065209382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 18:38:57.065434 env[1142]: time="2024-02-09T18:38:57.065219993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 18:38:57.065434 env[1142]: time="2024-02-09T18:38:57.065233410Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 18:38:57.065434 env[1142]: time="2024-02-09T18:38:57.065373929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 18:38:57.065434 env[1142]: time="2024-02-09T18:38:57.065388385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 18:38:57.065434 env[1142]: time="2024-02-09T18:38:57.065408069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Feb 9 18:38:57.065765 env[1142]: time="2024-02-09T18:38:57.065429599Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 18:38:57.065765 env[1142]: time="2024-02-09T18:38:57.065445169Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 18:38:57.065765 env[1142]: time="2024-02-09T18:38:57.065457587Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 18:38:57.065765 env[1142]: time="2024-02-09T18:38:57.065473619Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 18:38:57.065765 env[1142]: time="2024-02-09T18:38:57.065504683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 18:38:57.065863 env[1142]: time="2024-02-09T18:38:57.065692375Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 18:38:57.065863 env[1142]: time="2024-02-09T18:38:57.065748660Z" level=info msg="Connect containerd service" Feb 9 18:38:57.065863 env[1142]: time="2024-02-09T18:38:57.065778455Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 18:38:57.069715 env[1142]: time="2024-02-09T18:38:57.066387205Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 18:38:57.069715 env[1142]: time="2024-02-09T18:38:57.066758321Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 18:38:57.069715 env[1142]: time="2024-02-09T18:38:57.066794999Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 9 18:38:57.069715 env[1142]: time="2024-02-09T18:38:57.066834751Z" level=info msg="containerd successfully booted in 0.048762s" Feb 9 18:38:57.069715 env[1142]: time="2024-02-09T18:38:57.066839403Z" level=info msg="Start subscribing containerd event" Feb 9 18:38:57.069715 env[1142]: time="2024-02-09T18:38:57.066913258Z" level=info msg="Start recovering state" Feb 9 18:38:57.069715 env[1142]: time="2024-02-09T18:38:57.066975540Z" level=info msg="Start event monitor" Feb 9 18:38:57.069715 env[1142]: time="2024-02-09T18:38:57.066994109Z" level=info msg="Start snapshots syncer" Feb 9 18:38:57.069715 env[1142]: time="2024-02-09T18:38:57.067002913Z" level=info msg="Start cni network conf syncer for default" Feb 9 18:38:57.069715 env[1142]: time="2024-02-09T18:38:57.067011063Z" level=info msg="Start streaming server" Feb 9 18:38:57.068702 systemd[1]: Started containerd.service. Feb 9 18:38:57.088600 tar[1139]: ./host-local Feb 9 18:38:57.110226 tar[1139]: ./vrf Feb 9 18:38:57.138637 tar[1139]: ./bridge Feb 9 18:38:57.168099 tar[1139]: ./tuning Feb 9 18:38:57.180042 locksmithd[1166]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 18:38:57.190587 tar[1139]: ./firewall Feb 9 18:38:57.218842 tar[1139]: ./host-device Feb 9 18:38:57.244060 tar[1139]: ./sbr Feb 9 18:38:57.266986 tar[1139]: ./loopback Feb 9 18:38:57.289193 tar[1139]: ./dhcp Feb 9 18:38:57.351174 tar[1139]: ./ptp Feb 9 18:38:57.357744 systemd[1]: Finished prepare-critools.service. Feb 9 18:38:57.379460 tar[1139]: ./ipvlan Feb 9 18:38:57.405624 tar[1139]: ./bandwidth Feb 9 18:38:57.440265 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 18:38:57.743587 systemd-networkd[1040]: eth0: Gained IPv6LL Feb 9 18:38:57.839901 sshd_keygen[1129]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 18:38:57.856349 systemd[1]: Finished sshd-keygen.service. Feb 9 18:38:57.858576 systemd[1]: Starting issuegen.service... 
Feb 9 18:38:57.862763 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 18:38:57.862908 systemd[1]: Finished issuegen.service. Feb 9 18:38:57.865139 systemd[1]: Starting systemd-user-sessions.service... Feb 9 18:38:57.870847 systemd[1]: Finished systemd-user-sessions.service. Feb 9 18:38:57.872861 systemd[1]: Started getty@tty1.service. Feb 9 18:38:57.874789 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 9 18:38:57.875773 systemd[1]: Reached target getty.target. Feb 9 18:38:57.876552 systemd[1]: Reached target multi-user.target. Feb 9 18:38:57.878599 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 18:38:57.884604 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 18:38:57.884766 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 18:38:57.885751 systemd[1]: Startup finished in 598ms (kernel) + 5.273s (initrd) + 4.029s (userspace) = 9.901s. Feb 9 18:39:01.485349 systemd[1]: Created slice system-sshd.slice. Feb 9 18:39:01.486564 systemd[1]: Started sshd@0-10.0.0.109:22-10.0.0.1:38844.service. Feb 9 18:39:01.540542 sshd[1197]: Accepted publickey for core from 10.0.0.1 port 38844 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:39:01.542508 sshd[1197]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:39:01.550957 systemd[1]: Created slice user-500.slice. Feb 9 18:39:01.552118 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 18:39:01.553732 systemd-logind[1126]: New session 1 of user core. Feb 9 18:39:01.560008 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 18:39:01.561477 systemd[1]: Starting user@500.service... Feb 9 18:39:01.564362 (systemd)[1200]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:39:01.628664 systemd[1200]: Queued start job for default target default.target. Feb 9 18:39:01.629158 systemd[1200]: Reached target paths.target. 
Feb 9 18:39:01.629178 systemd[1200]: Reached target sockets.target. Feb 9 18:39:01.629190 systemd[1200]: Reached target timers.target. Feb 9 18:39:01.629202 systemd[1200]: Reached target basic.target. Feb 9 18:39:01.629255 systemd[1200]: Reached target default.target. Feb 9 18:39:01.629279 systemd[1200]: Startup finished in 58ms. Feb 9 18:39:01.629500 systemd[1]: Started user@500.service. Feb 9 18:39:01.630468 systemd[1]: Started session-1.scope. Feb 9 18:39:01.680476 systemd[1]: Started sshd@1-10.0.0.109:22-10.0.0.1:38846.service. Feb 9 18:39:01.725604 sshd[1209]: Accepted publickey for core from 10.0.0.1 port 38846 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:39:01.726870 sshd[1209]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:39:01.730813 systemd-logind[1126]: New session 2 of user core. Feb 9 18:39:01.731239 systemd[1]: Started session-2.scope. Feb 9 18:39:01.786274 sshd[1209]: pam_unix(sshd:session): session closed for user core Feb 9 18:39:01.789789 systemd[1]: Started sshd@2-10.0.0.109:22-10.0.0.1:38848.service. Feb 9 18:39:01.791336 systemd[1]: sshd@1-10.0.0.109:22-10.0.0.1:38846.service: Deactivated successfully. Feb 9 18:39:01.791966 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 18:39:01.792493 systemd-logind[1126]: Session 2 logged out. Waiting for processes to exit. Feb 9 18:39:01.793359 systemd-logind[1126]: Removed session 2. Feb 9 18:39:01.823093 sshd[1214]: Accepted publickey for core from 10.0.0.1 port 38848 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:39:01.824320 sshd[1214]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:39:01.827848 systemd-logind[1126]: New session 3 of user core. Feb 9 18:39:01.828296 systemd[1]: Started session-3.scope. 
Feb 9 18:39:01.876927 sshd[1214]: pam_unix(sshd:session): session closed for user core Feb 9 18:39:01.879772 systemd[1]: sshd@2-10.0.0.109:22-10.0.0.1:38848.service: Deactivated successfully. Feb 9 18:39:01.880362 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 18:39:01.880912 systemd-logind[1126]: Session 3 logged out. Waiting for processes to exit. Feb 9 18:39:01.881948 systemd[1]: Started sshd@3-10.0.0.109:22-10.0.0.1:38862.service. Feb 9 18:39:01.882540 systemd-logind[1126]: Removed session 3. Feb 9 18:39:01.915845 sshd[1221]: Accepted publickey for core from 10.0.0.1 port 38862 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:39:01.917052 sshd[1221]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:39:01.920510 systemd-logind[1126]: New session 4 of user core. Feb 9 18:39:01.921381 systemd[1]: Started session-4.scope. Feb 9 18:39:01.973709 sshd[1221]: pam_unix(sshd:session): session closed for user core Feb 9 18:39:01.977591 systemd[1]: sshd@3-10.0.0.109:22-10.0.0.1:38862.service: Deactivated successfully. Feb 9 18:39:01.978225 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 18:39:01.978836 systemd-logind[1126]: Session 4 logged out. Waiting for processes to exit. Feb 9 18:39:01.980553 systemd[1]: Started sshd@4-10.0.0.109:22-10.0.0.1:38878.service. Feb 9 18:39:01.981371 systemd-logind[1126]: Removed session 4. Feb 9 18:39:02.013993 sshd[1227]: Accepted publickey for core from 10.0.0.1 port 38878 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:39:02.015671 sshd[1227]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:39:02.019068 systemd-logind[1126]: New session 5 of user core. Feb 9 18:39:02.020546 systemd[1]: Started session-5.scope. 
Feb 9 18:39:02.078022 sudo[1230]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 18:39:02.078224 sudo[1230]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 18:39:02.598079 systemd[1]: Reloading. Feb 9 18:39:02.650736 /usr/lib/systemd/system-generators/torcx-generator[1260]: time="2024-02-09T18:39:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:39:02.650775 /usr/lib/systemd/system-generators/torcx-generator[1260]: time="2024-02-09T18:39:02Z" level=info msg="torcx already run" Feb 9 18:39:02.746286 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:39:02.746305 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:39:02.761131 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:39:02.823139 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 18:39:02.828769 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 18:39:02.829327 systemd[1]: Reached target network-online.target. Feb 9 18:39:02.830908 systemd[1]: Started kubelet.service. Feb 9 18:39:02.840855 systemd[1]: Starting coreos-metadata.service... Feb 9 18:39:02.847231 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 9 18:39:02.847385 systemd[1]: Finished coreos-metadata.service. 
Feb 9 18:39:03.014445 kubelet[1298]: E0209 18:39:03.013802 1298 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 18:39:03.017126 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 18:39:03.017246 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 18:39:03.127204 systemd[1]: Stopped kubelet.service. Feb 9 18:39:03.141295 systemd[1]: Reloading. Feb 9 18:39:03.200035 /usr/lib/systemd/system-generators/torcx-generator[1364]: time="2024-02-09T18:39:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:39:03.200073 /usr/lib/systemd/system-generators/torcx-generator[1364]: time="2024-02-09T18:39:03Z" level=info msg="torcx already run" Feb 9 18:39:03.251403 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:39:03.251427 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:39:03.266335 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:39:03.332914 systemd[1]: Started kubelet.service. Feb 9 18:39:03.370510 kubelet[1402]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Feb 9 18:39:03.370510 kubelet[1402]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:39:03.370817 kubelet[1402]: I0209 18:39:03.370675 1402 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 18:39:03.371860 kubelet[1402]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 18:39:03.371860 kubelet[1402]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:39:04.429111 kubelet[1402]: I0209 18:39:04.429076 1402 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 18:39:04.429476 kubelet[1402]: I0209 18:39:04.429457 1402 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 18:39:04.429772 kubelet[1402]: I0209 18:39:04.429752 1402 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 18:39:04.433672 kubelet[1402]: I0209 18:39:04.433644 1402 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 18:39:04.435422 kubelet[1402]: W0209 18:39:04.435388 1402 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 18:39:04.436536 kubelet[1402]: I0209 18:39:04.436513 1402 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 18:39:04.436951 kubelet[1402]: I0209 18:39:04.436935 1402 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 18:39:04.437094 kubelet[1402]: I0209 18:39:04.437079 1402 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 18:39:04.437277 kubelet[1402]: I0209 18:39:04.437264 1402 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 18:39:04.437354 kubelet[1402]: I0209 18:39:04.437345 1402 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 18:39:04.437580 kubelet[1402]: I0209 18:39:04.437559 1402 state_mem.go:36] "Initialized new 
in-memory state store" Feb 9 18:39:04.441831 kubelet[1402]: I0209 18:39:04.441811 1402 kubelet.go:398] "Attempting to sync node with API server" Feb 9 18:39:04.441924 kubelet[1402]: I0209 18:39:04.441913 1402 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 18:39:04.442248 kubelet[1402]: I0209 18:39:04.442235 1402 kubelet.go:297] "Adding apiserver pod source" Feb 9 18:39:04.442321 kubelet[1402]: I0209 18:39:04.442309 1402 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 18:39:04.442385 kubelet[1402]: E0209 18:39:04.442328 1402 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:04.442452 kubelet[1402]: E0209 18:39:04.442435 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:04.443183 kubelet[1402]: I0209 18:39:04.443166 1402 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 18:39:04.444040 kubelet[1402]: W0209 18:39:04.444020 1402 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 18:39:04.444611 kubelet[1402]: I0209 18:39:04.444591 1402 server.go:1186] "Started kubelet" Feb 9 18:39:04.445065 kubelet[1402]: E0209 18:39:04.445045 1402 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 18:39:04.445065 kubelet[1402]: E0209 18:39:04.445068 1402 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 18:39:04.445516 kubelet[1402]: I0209 18:39:04.445491 1402 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 18:39:04.446746 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 18:39:04.446801 kubelet[1402]: I0209 18:39:04.446338 1402 server.go:451] "Adding debug handlers to kubelet server" Feb 9 18:39:04.446977 kubelet[1402]: I0209 18:39:04.446951 1402 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 18:39:04.447177 kubelet[1402]: I0209 18:39:04.447159 1402 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 18:39:04.447238 kubelet[1402]: I0209 18:39:04.447227 1402 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 18:39:04.448362 kubelet[1402]: E0209 18:39:04.448301 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:04.456174 kubelet[1402]: W0209 18:39:04.456133 1402 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.109" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 18:39:04.456282 kubelet[1402]: E0209 18:39:04.456269 1402 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.109" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 18:39:04.456623 kubelet[1402]: W0209 18:39:04.456599 1402 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 18:39:04.456670 kubelet[1402]: E0209 18:39:04.456627 1402 reflector.go:140] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 18:39:04.456670 kubelet[1402]: E0209 18:39:04.456655 1402 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.0.0.109" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 18:39:04.459322 kubelet[1402]: E0209 18:39:04.459233 1402 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.109.17b245ca6d325ad3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.109", UID:"10.0.0.109", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.109"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 444566227, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 444566227, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:39:04.459820 kubelet[1402]: W0209 18:39:04.459802 1402 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 18:39:04.459947 kubelet[1402]: E0209 18:39:04.459934 1402 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 18:39:04.463821 kubelet[1402]: E0209 18:39:04.463741 1402 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.109.17b245ca6d39e2ea", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.109", UID:"10.0.0.109", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.109"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 445059818, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 445059818, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" 
cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:39:04.464380 kubelet[1402]: I0209 18:39:04.464355 1402 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 18:39:04.464380 kubelet[1402]: I0209 18:39:04.464372 1402 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 18:39:04.464489 kubelet[1402]: I0209 18:39:04.464464 1402 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:39:04.465489 kubelet[1402]: E0209 18:39:04.465406 1402 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.109.17b245ca6e576151", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.109", UID:"10.0.0.109", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.109 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.109"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 463769937, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 463769937, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:39:04.466207 kubelet[1402]: I0209 18:39:04.466163 1402 policy_none.go:49] "None policy: Start" Feb 9 18:39:04.466452 kubelet[1402]: E0209 18:39:04.466356 1402 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.109.17b245ca6e5774b4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.109", UID:"10.0.0.109", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.109 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.109"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 463774900, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 463774900, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:39:04.466936 kubelet[1402]: I0209 18:39:04.466899 1402 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 18:39:04.467032 kubelet[1402]: I0209 18:39:04.467020 1402 state_mem.go:35] "Initializing new in-memory state store" Feb 9 18:39:04.467358 kubelet[1402]: E0209 18:39:04.467288 1402 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.109.17b245ca6e577fc8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.109", UID:"10.0.0.109", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.109 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.109"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 463777736, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 463777736, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:39:04.474518 systemd[1]: Created slice kubepods.slice. Feb 9 18:39:04.478363 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 18:39:04.481149 systemd[1]: Created slice kubepods-besteffort.slice. 
Feb 9 18:39:04.493135 kubelet[1402]: I0209 18:39:04.493095 1402 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 18:39:04.493352 kubelet[1402]: I0209 18:39:04.493330 1402 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 18:39:04.493907 kubelet[1402]: E0209 18:39:04.493828 1402 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.109\" not found" Feb 9 18:39:04.495468 kubelet[1402]: E0209 18:39:04.495375 1402 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.109.17b245ca70290e5f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.109", UID:"10.0.0.109", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.109"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 494288479, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 494288479, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:39:04.549397 kubelet[1402]: I0209 18:39:04.549354 1402 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.109" Feb 9 18:39:04.551159 kubelet[1402]: E0209 18:39:04.551124 1402 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.109" Feb 9 18:39:04.551305 kubelet[1402]: E0209 18:39:04.551130 1402 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.109.17b245ca6e576151", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.109", UID:"10.0.0.109", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.109 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.109"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 463769937, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 549312301, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.109.17b245ca6e576151" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:39:04.552317 kubelet[1402]: E0209 18:39:04.552253 1402 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.109.17b245ca6e5774b4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.109", UID:"10.0.0.109", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.109 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.109"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 463774900, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 549319942, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.109.17b245ca6e5774b4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:39:04.553149 kubelet[1402]: E0209 18:39:04.553087 1402 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.109.17b245ca6e577fc8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.109", UID:"10.0.0.109", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.109 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.109"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 463777736, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 549323881, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.109.17b245ca6e577fc8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:39:04.594261 kubelet[1402]: I0209 18:39:04.594225 1402 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 18:39:04.613317 kubelet[1402]: I0209 18:39:04.613282 1402 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 18:39:04.613317 kubelet[1402]: I0209 18:39:04.613306 1402 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 18:39:04.613317 kubelet[1402]: I0209 18:39:04.613322 1402 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 18:39:04.613481 kubelet[1402]: E0209 18:39:04.613365 1402 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 18:39:04.615209 kubelet[1402]: W0209 18:39:04.615185 1402 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 18:39:04.615209 kubelet[1402]: E0209 18:39:04.615210 1402 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 18:39:04.658741 kubelet[1402]: E0209 18:39:04.658702 1402 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.0.0.109" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 18:39:04.752802 kubelet[1402]: I0209 18:39:04.752721 1402 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.109" Feb 9 18:39:04.754426 kubelet[1402]: E0209 18:39:04.754389 1402 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.109" Feb 9 18:39:04.755407 kubelet[1402]: E0209 18:39:04.755325 1402 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, 
ObjectMeta:v1.ObjectMeta{Name:"10.0.0.109.17b245ca6e576151", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.109", UID:"10.0.0.109", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.109 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.109"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 463769937, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 752677390, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.109.17b245ca6e576151" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:39:04.756341 kubelet[1402]: E0209 18:39:04.756279 1402 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.109.17b245ca6e5774b4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.109", UID:"10.0.0.109", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.109 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.109"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 463774900, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 752690428, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.109.17b245ca6e5774b4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:39:04.846887 kubelet[1402]: E0209 18:39:04.846793 1402 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.109.17b245ca6e577fc8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.109", UID:"10.0.0.109", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.109 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.109"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 463777736, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 752695864, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.109.17b245ca6e577fc8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:39:05.060745 kubelet[1402]: E0209 18:39:05.060647 1402 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.0.0.109" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 18:39:05.155839 kubelet[1402]: I0209 18:39:05.155818 1402 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.109" Feb 9 18:39:05.157308 kubelet[1402]: E0209 18:39:05.157278 1402 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.109" Feb 9 18:39:05.157486 kubelet[1402]: E0209 18:39:05.157399 1402 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.109.17b245ca6e576151", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.109", UID:"10.0.0.109", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.109 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.109"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 463769937, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 5, 155783651, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), 
ReportingController:"", ReportingInstance:""}': 'events "10.0.0.109.17b245ca6e576151" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:39:05.246995 kubelet[1402]: E0209 18:39:05.246896 1402 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.109.17b245ca6e5774b4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.109", UID:"10.0.0.109", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.109 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.109"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 463774900, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 5, 155790992, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.109.17b245ca6e5774b4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:39:05.442817 kubelet[1402]: E0209 18:39:05.442786 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:05.447074 kubelet[1402]: E0209 18:39:05.446969 1402 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.109.17b245ca6e577fc8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.109", UID:"10.0.0.109", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.109 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.109"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 463777736, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 5, 155793873, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.109.17b245ca6e577fc8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:39:05.463805 kubelet[1402]: W0209 18:39:05.463777 1402 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.109" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 18:39:05.463854 kubelet[1402]: E0209 18:39:05.463810 1402 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.109" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 18:39:05.498027 kubelet[1402]: W0209 18:39:05.497997 1402 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 18:39:05.498027 kubelet[1402]: E0209 18:39:05.498026 1402 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 18:39:05.862671 kubelet[1402]: E0209 18:39:05.862626 1402 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.0.0.109" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 18:39:05.929733 kubelet[1402]: W0209 18:39:05.929698 1402 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 18:39:05.929733 kubelet[1402]: E0209 18:39:05.929733 1402 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: 
runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 18:39:05.946953 kubelet[1402]: W0209 18:39:05.946913 1402 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 18:39:05.946953 kubelet[1402]: E0209 18:39:05.946946 1402 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 18:39:05.958889 kubelet[1402]: I0209 18:39:05.958858 1402 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.109" Feb 9 18:39:05.959926 kubelet[1402]: E0209 18:39:05.959900 1402 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.109" Feb 9 18:39:05.959926 kubelet[1402]: E0209 18:39:05.959847 1402 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.109.17b245ca6e576151", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.109", UID:"10.0.0.109", APIVersion:"", ResourceVersion:"", FieldPath:""}, 
Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.109 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.109"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 463769937, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 5, 958813811, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.109.17b245ca6e576151" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:39:05.960755 kubelet[1402]: E0209 18:39:05.960687 1402 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.109.17b245ca6e5774b4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.109", UID:"10.0.0.109", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.109 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.109"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 463774900, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 5, 958827269, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", 
ReportingInstance:""}': 'events "10.0.0.109.17b245ca6e5774b4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:39:06.047260 kubelet[1402]: E0209 18:39:06.047159 1402 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.109.17b245ca6e577fc8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.109", UID:"10.0.0.109", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.109 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.109"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 463777736, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 5, 958830979, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.109.17b245ca6e577fc8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:39:06.443590 kubelet[1402]: E0209 18:39:06.443541 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:07.175607 kubelet[1402]: W0209 18:39:07.175564 1402 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 18:39:07.175607 kubelet[1402]: E0209 18:39:07.175597 1402 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 18:39:07.444537 kubelet[1402]: E0209 18:39:07.444402 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:07.463371 kubelet[1402]: E0209 18:39:07.463340 1402 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.0.0.109" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 18:39:07.551716 kubelet[1402]: W0209 18:39:07.551688 1402 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 18:39:07.551716 kubelet[1402]: E0209 18:39:07.551719 1402 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 18:39:07.561354 kubelet[1402]: I0209 18:39:07.561329 1402 
kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.109" Feb 9 18:39:07.562446 kubelet[1402]: E0209 18:39:07.562410 1402 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.109" Feb 9 18:39:07.562492 kubelet[1402]: E0209 18:39:07.562402 1402 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.109.17b245ca6e576151", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.109", UID:"10.0.0.109", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.109 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.109"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 463769937, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 7, 561294292, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.109.17b245ca6e576151" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:39:07.563264 kubelet[1402]: E0209 18:39:07.563192 1402 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.109.17b245ca6e5774b4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.109", UID:"10.0.0.109", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.109 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.109"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 463774900, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 7, 561303952, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.109.17b245ca6e5774b4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:39:07.563943 kubelet[1402]: E0209 18:39:07.563881 1402 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.109.17b245ca6e577fc8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.109", UID:"10.0.0.109", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.109 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.109"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 463777736, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 7, 561306644, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.109.17b245ca6e577fc8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:39:08.187033 kubelet[1402]: W0209 18:39:08.186988 1402 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.109" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 18:39:08.187033 kubelet[1402]: E0209 18:39:08.187026 1402 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.109" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 18:39:08.445143 kubelet[1402]: E0209 18:39:08.445036 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:08.484320 kubelet[1402]: W0209 18:39:08.484289 1402 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 18:39:08.484320 kubelet[1402]: E0209 18:39:08.484321 1402 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 18:39:09.445467 kubelet[1402]: E0209 18:39:09.445407 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:10.446485 kubelet[1402]: E0209 18:39:10.446435 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:10.665448 kubelet[1402]: E0209 18:39:10.665398 1402 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.0.0.109" is forbidden: User 
"system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 18:39:10.763558 kubelet[1402]: I0209 18:39:10.763328 1402 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.109" Feb 9 18:39:10.764393 kubelet[1402]: E0209 18:39:10.764309 1402 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.109.17b245ca6e576151", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.109", UID:"10.0.0.109", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.109 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.109"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 463769937, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 10, 763279475, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.109.17b245ca6e576151" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:39:10.764494 kubelet[1402]: E0209 18:39:10.764431 1402 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.109" Feb 9 18:39:10.765220 kubelet[1402]: E0209 18:39:10.765148 1402 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.109.17b245ca6e5774b4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.109", UID:"10.0.0.109", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.109 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.109"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 463774900, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 10, 763291076, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.109.17b245ca6e5774b4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:39:10.765951 kubelet[1402]: E0209 18:39:10.765886 1402 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.109.17b245ca6e577fc8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.109", UID:"10.0.0.109", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.109 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.109"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 463777736, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 10, 763293976, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.109.17b245ca6e577fc8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:39:11.447392 kubelet[1402]: E0209 18:39:11.447346 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:12.448312 kubelet[1402]: E0209 18:39:12.448262 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:12.537750 kubelet[1402]: W0209 18:39:12.537716 1402 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 18:39:12.537941 kubelet[1402]: E0209 18:39:12.537925 1402 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 18:39:13.154795 kubelet[1402]: W0209 18:39:13.154751 1402 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 18:39:13.154795 kubelet[1402]: E0209 18:39:13.154793 1402 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 18:39:13.389375 kubelet[1402]: W0209 18:39:13.389343 1402 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 18:39:13.389586 kubelet[1402]: E0209 18:39:13.389571 1402 reflector.go:140] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 18:39:13.449039 kubelet[1402]: E0209 18:39:13.448782 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:13.613143 kubelet[1402]: W0209 18:39:13.613108 1402 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.109" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 18:39:13.613329 kubelet[1402]: E0209 18:39:13.613316 1402 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.109" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 18:39:14.431510 kubelet[1402]: I0209 18:39:14.431443 1402 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 9 18:39:14.449649 kubelet[1402]: E0209 18:39:14.449610 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:14.494302 kubelet[1402]: E0209 18:39:14.494245 1402 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.109\" not found" Feb 9 18:39:14.824365 kubelet[1402]: E0209 18:39:14.824330 1402 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.109" not found Feb 9 18:39:15.450253 kubelet[1402]: E0209 18:39:15.450189 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 
9 18:39:15.863817 kubelet[1402]: E0209 18:39:15.863765 1402 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.109" not found Feb 9 18:39:16.451336 kubelet[1402]: E0209 18:39:16.451289 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:17.069704 kubelet[1402]: E0209 18:39:17.069653 1402 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.109\" not found" node="10.0.0.109" Feb 9 18:39:17.165574 kubelet[1402]: I0209 18:39:17.165533 1402 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.109" Feb 9 18:39:17.268340 kubelet[1402]: I0209 18:39:17.268295 1402 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.109" Feb 9 18:39:17.271807 kubelet[1402]: I0209 18:39:17.271777 1402 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 9 18:39:17.272170 env[1142]: time="2024-02-09T18:39:17.272119302Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 18:39:17.272407 kubelet[1402]: I0209 18:39:17.272324 1402 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 9 18:39:17.278949 kubelet[1402]: E0209 18:39:17.278913 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:17.346064 sudo[1230]: pam_unix(sudo:session): session closed for user root Feb 9 18:39:17.347869 sshd[1227]: pam_unix(sshd:session): session closed for user core Feb 9 18:39:17.350382 systemd[1]: sshd@4-10.0.0.109:22-10.0.0.1:38878.service: Deactivated successfully. Feb 9 18:39:17.351054 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 18:39:17.351984 systemd-logind[1126]: Session 5 logged out. 
Waiting for processes to exit. Feb 9 18:39:17.352819 systemd-logind[1126]: Removed session 5. Feb 9 18:39:17.380046 kubelet[1402]: E0209 18:39:17.380007 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:17.451719 kubelet[1402]: E0209 18:39:17.451678 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:17.481029 kubelet[1402]: E0209 18:39:17.481000 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:17.581557 kubelet[1402]: E0209 18:39:17.581511 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:17.682353 kubelet[1402]: E0209 18:39:17.682245 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:17.782696 kubelet[1402]: E0209 18:39:17.782669 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:17.883125 kubelet[1402]: E0209 18:39:17.883084 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:17.983764 kubelet[1402]: E0209 18:39:17.983668 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:18.084183 kubelet[1402]: E0209 18:39:18.084154 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:18.184474 kubelet[1402]: E0209 18:39:18.184445 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:18.284950 kubelet[1402]: E0209 18:39:18.284879 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not 
found" Feb 9 18:39:18.385522 kubelet[1402]: E0209 18:39:18.385484 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:18.452134 kubelet[1402]: E0209 18:39:18.452100 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:18.486372 kubelet[1402]: E0209 18:39:18.486336 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:18.586855 kubelet[1402]: E0209 18:39:18.586827 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:18.687271 kubelet[1402]: E0209 18:39:18.687236 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:18.787727 kubelet[1402]: E0209 18:39:18.787692 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:18.888224 kubelet[1402]: E0209 18:39:18.888107 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:18.988680 kubelet[1402]: E0209 18:39:18.988657 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:19.089040 kubelet[1402]: E0209 18:39:19.089010 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:19.189514 kubelet[1402]: E0209 18:39:19.189432 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:19.289907 kubelet[1402]: E0209 18:39:19.289872 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:19.390358 kubelet[1402]: E0209 18:39:19.390291 1402 
kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:19.452959 kubelet[1402]: E0209 18:39:19.452866 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:19.491010 kubelet[1402]: E0209 18:39:19.490980 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:19.591243 kubelet[1402]: E0209 18:39:19.591209 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:19.691682 kubelet[1402]: E0209 18:39:19.691665 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:19.792128 kubelet[1402]: E0209 18:39:19.792060 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:19.892529 kubelet[1402]: E0209 18:39:19.892485 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:19.992967 kubelet[1402]: E0209 18:39:19.992935 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:20.093369 kubelet[1402]: E0209 18:39:20.093326 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:20.193744 kubelet[1402]: E0209 18:39:20.193711 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:20.294120 kubelet[1402]: E0209 18:39:20.294093 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:20.394478 kubelet[1402]: E0209 18:39:20.394402 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node 
\"10.0.0.109\" not found" Feb 9 18:39:20.452951 kubelet[1402]: E0209 18:39:20.452925 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:20.495055 kubelet[1402]: E0209 18:39:20.495028 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:20.595692 kubelet[1402]: E0209 18:39:20.595660 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:20.696213 kubelet[1402]: E0209 18:39:20.696117 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:20.796649 kubelet[1402]: E0209 18:39:20.796613 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:20.897105 kubelet[1402]: E0209 18:39:20.897069 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:20.997816 kubelet[1402]: E0209 18:39:20.997713 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:21.098192 kubelet[1402]: E0209 18:39:21.098163 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:21.198615 kubelet[1402]: E0209 18:39:21.198591 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:21.299094 kubelet[1402]: E0209 18:39:21.299026 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:21.399444 kubelet[1402]: E0209 18:39:21.399407 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:21.453467 kubelet[1402]: E0209 18:39:21.453444 1402 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:21.499670 kubelet[1402]: E0209 18:39:21.499636 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:21.600246 kubelet[1402]: E0209 18:39:21.600226 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:21.700776 kubelet[1402]: E0209 18:39:21.700720 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:21.801243 kubelet[1402]: E0209 18:39:21.801188 1402 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.109\" not found" Feb 9 18:39:22.453878 kubelet[1402]: I0209 18:39:22.453846 1402 apiserver.go:52] "Watching apiserver" Feb 9 18:39:22.454070 kubelet[1402]: E0209 18:39:22.454045 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:22.456629 kubelet[1402]: I0209 18:39:22.456585 1402 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:39:22.456721 kubelet[1402]: I0209 18:39:22.456655 1402 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:39:22.460875 systemd[1]: Created slice kubepods-besteffort-pod33d302c8_5a73_438d_b6f5_c2325f24ddd7.slice. Feb 9 18:39:22.471917 systemd[1]: Created slice kubepods-burstable-podf6f2869a_c5f3_4fd4_9da1_14ab9e17af72.slice. 
Feb 9 18:39:22.548867 kubelet[1402]: I0209 18:39:22.548832 1402 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 18:39:22.631847 kubelet[1402]: I0209 18:39:22.631820 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-cilium-run\") pod \"cilium-qslrd\" (UID: \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\") " pod="kube-system/cilium-qslrd" Feb 9 18:39:22.631981 kubelet[1402]: I0209 18:39:22.631858 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-cilium-cgroup\") pod \"cilium-qslrd\" (UID: \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\") " pod="kube-system/cilium-qslrd" Feb 9 18:39:22.631981 kubelet[1402]: I0209 18:39:22.631882 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-cni-path\") pod \"cilium-qslrd\" (UID: \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\") " pod="kube-system/cilium-qslrd" Feb 9 18:39:22.631981 kubelet[1402]: I0209 18:39:22.631900 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-etc-cni-netd\") pod \"cilium-qslrd\" (UID: \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\") " pod="kube-system/cilium-qslrd" Feb 9 18:39:22.631981 kubelet[1402]: I0209 18:39:22.631954 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-cilium-config-path\") pod \"cilium-qslrd\" (UID: \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\") " 
pod="kube-system/cilium-qslrd" Feb 9 18:39:22.631981 kubelet[1402]: I0209 18:39:22.631976 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/33d302c8-5a73-438d-b6f5-c2325f24ddd7-kube-proxy\") pod \"kube-proxy-zzt8m\" (UID: \"33d302c8-5a73-438d-b6f5-c2325f24ddd7\") " pod="kube-system/kube-proxy-zzt8m" Feb 9 18:39:22.632096 kubelet[1402]: I0209 18:39:22.631997 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33d302c8-5a73-438d-b6f5-c2325f24ddd7-xtables-lock\") pod \"kube-proxy-zzt8m\" (UID: \"33d302c8-5a73-438d-b6f5-c2325f24ddd7\") " pod="kube-system/kube-proxy-zzt8m" Feb 9 18:39:22.632096 kubelet[1402]: I0209 18:39:22.632033 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-lib-modules\") pod \"cilium-qslrd\" (UID: \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\") " pod="kube-system/cilium-qslrd" Feb 9 18:39:22.632096 kubelet[1402]: I0209 18:39:22.632054 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-clustermesh-secrets\") pod \"cilium-qslrd\" (UID: \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\") " pod="kube-system/cilium-qslrd" Feb 9 18:39:22.632160 kubelet[1402]: I0209 18:39:22.632075 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp87q\" (UniqueName: \"kubernetes.io/projected/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-kube-api-access-lp87q\") pod \"cilium-qslrd\" (UID: \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\") " pod="kube-system/cilium-qslrd" Feb 9 18:39:22.632160 kubelet[1402]: I0209 18:39:22.632136 1402 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lk9d\" (UniqueName: \"kubernetes.io/projected/33d302c8-5a73-438d-b6f5-c2325f24ddd7-kube-api-access-8lk9d\") pod \"kube-proxy-zzt8m\" (UID: \"33d302c8-5a73-438d-b6f5-c2325f24ddd7\") " pod="kube-system/kube-proxy-zzt8m" Feb 9 18:39:22.632160 kubelet[1402]: I0209 18:39:22.632155 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-hostproc\") pod \"cilium-qslrd\" (UID: \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\") " pod="kube-system/cilium-qslrd" Feb 9 18:39:22.632223 kubelet[1402]: I0209 18:39:22.632197 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-host-proc-sys-kernel\") pod \"cilium-qslrd\" (UID: \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\") " pod="kube-system/cilium-qslrd" Feb 9 18:39:22.632223 kubelet[1402]: I0209 18:39:22.632220 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-xtables-lock\") pod \"cilium-qslrd\" (UID: \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\") " pod="kube-system/cilium-qslrd" Feb 9 18:39:22.632273 kubelet[1402]: I0209 18:39:22.632239 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-host-proc-sys-net\") pod \"cilium-qslrd\" (UID: \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\") " pod="kube-system/cilium-qslrd" Feb 9 18:39:22.632273 kubelet[1402]: I0209 18:39:22.632269 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-hubble-tls\") pod \"cilium-qslrd\" (UID: \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\") " pod="kube-system/cilium-qslrd" Feb 9 18:39:22.632314 kubelet[1402]: I0209 18:39:22.632289 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33d302c8-5a73-438d-b6f5-c2325f24ddd7-lib-modules\") pod \"kube-proxy-zzt8m\" (UID: \"33d302c8-5a73-438d-b6f5-c2325f24ddd7\") " pod="kube-system/kube-proxy-zzt8m" Feb 9 18:39:22.632314 kubelet[1402]: I0209 18:39:22.632307 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-bpf-maps\") pod \"cilium-qslrd\" (UID: \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\") " pod="kube-system/cilium-qslrd" Feb 9 18:39:22.632358 kubelet[1402]: I0209 18:39:22.632320 1402 reconciler.go:41] "Reconciler: start to sync state" Feb 9 18:39:22.770161 kubelet[1402]: E0209 18:39:22.770073 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:39:22.770923 env[1142]: time="2024-02-09T18:39:22.770863856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zzt8m,Uid:33d302c8-5a73-438d-b6f5-c2325f24ddd7,Namespace:kube-system,Attempt:0,}" Feb 9 18:39:23.082935 kubelet[1402]: E0209 18:39:23.082903 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:39:23.083394 env[1142]: time="2024-02-09T18:39:23.083336265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qslrd,Uid:f6f2869a-c5f3-4fd4-9da1-14ab9e17af72,Namespace:kube-system,Attempt:0,}" Feb 9 18:39:23.454536 
kubelet[1402]: E0209 18:39:23.454423 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:23.466717 env[1142]: time="2024-02-09T18:39:23.466664419Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:39:23.468250 env[1142]: time="2024-02-09T18:39:23.468214786Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:39:23.470132 env[1142]: time="2024-02-09T18:39:23.470099269Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:39:23.471673 env[1142]: time="2024-02-09T18:39:23.471639887Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:39:23.474787 env[1142]: time="2024-02-09T18:39:23.474752286Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:39:23.476264 env[1142]: time="2024-02-09T18:39:23.476228982Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:39:23.476937 env[1142]: time="2024-02-09T18:39:23.476909559Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:39:23.480119 env[1142]: 
time="2024-02-09T18:39:23.480082685Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:39:23.515288 env[1142]: time="2024-02-09T18:39:23.515205124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:39:23.515409 env[1142]: time="2024-02-09T18:39:23.515289262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:39:23.515409 env[1142]: time="2024-02-09T18:39:23.515301527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:39:23.515544 env[1142]: time="2024-02-09T18:39:23.515507958Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f057ab36382580cf0e29ebdaf907aa1fad475e5e228e84b5d9981ef37a151b2 pid=1505 runtime=io.containerd.runc.v2 Feb 9 18:39:23.515867 env[1142]: time="2024-02-09T18:39:23.515816225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:39:23.515977 env[1142]: time="2024-02-09T18:39:23.515945589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:39:23.516057 env[1142]: time="2024-02-09T18:39:23.515966284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:39:23.516311 env[1142]: time="2024-02-09T18:39:23.516254016Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/57014abd55218f6a3c1f4a38eb5b0e6d44af46c219a28e244ccb3dfa5842953c pid=1504 runtime=io.containerd.runc.v2 Feb 9 18:39:23.542220 systemd[1]: Started cri-containerd-57014abd55218f6a3c1f4a38eb5b0e6d44af46c219a28e244ccb3dfa5842953c.scope. Feb 9 18:39:23.544518 systemd[1]: Started cri-containerd-8f057ab36382580cf0e29ebdaf907aa1fad475e5e228e84b5d9981ef37a151b2.scope. Feb 9 18:39:23.585380 env[1142]: time="2024-02-09T18:39:23.585329307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qslrd,Uid:f6f2869a-c5f3-4fd4-9da1-14ab9e17af72,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f057ab36382580cf0e29ebdaf907aa1fad475e5e228e84b5d9981ef37a151b2\"" Feb 9 18:39:23.586808 kubelet[1402]: E0209 18:39:23.586780 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:39:23.588056 env[1142]: time="2024-02-09T18:39:23.587999640Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 18:39:23.588215 env[1142]: time="2024-02-09T18:39:23.588185616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zzt8m,Uid:33d302c8-5a73-438d-b6f5-c2325f24ddd7,Namespace:kube-system,Attempt:0,} returns sandbox id \"57014abd55218f6a3c1f4a38eb5b0e6d44af46c219a28e244ccb3dfa5842953c\"" Feb 9 18:39:23.588704 kubelet[1402]: E0209 18:39:23.588687 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:39:23.739457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount114999056.mount: 
Deactivated successfully. Feb 9 18:39:24.442823 kubelet[1402]: E0209 18:39:24.442775 1402 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:24.454902 kubelet[1402]: E0209 18:39:24.454867 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:25.455663 kubelet[1402]: E0209 18:39:25.455620 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:26.455994 kubelet[1402]: E0209 18:39:26.455946 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:27.018776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount422508796.mount: Deactivated successfully. Feb 9 18:39:27.456477 kubelet[1402]: E0209 18:39:27.456434 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:28.457231 kubelet[1402]: E0209 18:39:28.457190 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:29.264907 env[1142]: time="2024-02-09T18:39:29.264853909Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:39:29.266193 env[1142]: time="2024-02-09T18:39:29.266161982Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:39:29.268052 env[1142]: time="2024-02-09T18:39:29.268023937Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:39:29.268625 env[1142]: time="2024-02-09T18:39:29.268596702Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 9 18:39:29.269528 env[1142]: time="2024-02-09T18:39:29.269501076Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 18:39:29.270876 env[1142]: time="2024-02-09T18:39:29.270844954Z" level=info msg="CreateContainer within sandbox \"8f057ab36382580cf0e29ebdaf907aa1fad475e5e228e84b5d9981ef37a151b2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 18:39:29.283704 env[1142]: time="2024-02-09T18:39:29.283670049Z" level=info msg="CreateContainer within sandbox \"8f057ab36382580cf0e29ebdaf907aa1fad475e5e228e84b5d9981ef37a151b2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d7377efe998f2a0041d5498bc7111cc37a6685fcb2d7ee5a04ec04309811b265\"" Feb 9 18:39:29.284242 env[1142]: time="2024-02-09T18:39:29.284216610Z" level=info msg="StartContainer for \"d7377efe998f2a0041d5498bc7111cc37a6685fcb2d7ee5a04ec04309811b265\"" Feb 9 18:39:29.300734 systemd[1]: Started cri-containerd-d7377efe998f2a0041d5498bc7111cc37a6685fcb2d7ee5a04ec04309811b265.scope. Feb 9 18:39:29.346587 env[1142]: time="2024-02-09T18:39:29.346534459Z" level=info msg="StartContainer for \"d7377efe998f2a0041d5498bc7111cc37a6685fcb2d7ee5a04ec04309811b265\" returns successfully" Feb 9 18:39:29.378760 systemd[1]: cri-containerd-d7377efe998f2a0041d5498bc7111cc37a6685fcb2d7ee5a04ec04309811b265.scope: Deactivated successfully. 
Feb 9 18:39:29.458305 kubelet[1402]: E0209 18:39:29.458268 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:29.478758 env[1142]: time="2024-02-09T18:39:29.478713110Z" level=info msg="shim disconnected" id=d7377efe998f2a0041d5498bc7111cc37a6685fcb2d7ee5a04ec04309811b265 Feb 9 18:39:29.478758 env[1142]: time="2024-02-09T18:39:29.478758677Z" level=warning msg="cleaning up after shim disconnected" id=d7377efe998f2a0041d5498bc7111cc37a6685fcb2d7ee5a04ec04309811b265 namespace=k8s.io Feb 9 18:39:29.478980 env[1142]: time="2024-02-09T18:39:29.478767839Z" level=info msg="cleaning up dead shim" Feb 9 18:39:29.486306 env[1142]: time="2024-02-09T18:39:29.486273028Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:39:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1620 runtime=io.containerd.runc.v2\n" Feb 9 18:39:29.651144 kubelet[1402]: E0209 18:39:29.651095 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:39:29.652907 env[1142]: time="2024-02-09T18:39:29.652867165Z" level=info msg="CreateContainer within sandbox \"8f057ab36382580cf0e29ebdaf907aa1fad475e5e228e84b5d9981ef37a151b2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 18:39:29.664682 env[1142]: time="2024-02-09T18:39:29.664640144Z" level=info msg="CreateContainer within sandbox \"8f057ab36382580cf0e29ebdaf907aa1fad475e5e228e84b5d9981ef37a151b2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b52661c13a93763bdf40a0f4a286eae2788c78ca05f584ebb86ae99275f051b7\"" Feb 9 18:39:29.665551 env[1142]: time="2024-02-09T18:39:29.665520955Z" level=info msg="StartContainer for \"b52661c13a93763bdf40a0f4a286eae2788c78ca05f584ebb86ae99275f051b7\"" Feb 9 18:39:29.678431 systemd[1]: Started 
cri-containerd-b52661c13a93763bdf40a0f4a286eae2788c78ca05f584ebb86ae99275f051b7.scope. Feb 9 18:39:29.711219 env[1142]: time="2024-02-09T18:39:29.711177741Z" level=info msg="StartContainer for \"b52661c13a93763bdf40a0f4a286eae2788c78ca05f584ebb86ae99275f051b7\" returns successfully" Feb 9 18:39:29.724219 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 18:39:29.724408 systemd[1]: Stopped systemd-sysctl.service. Feb 9 18:39:29.724610 systemd[1]: Stopping systemd-sysctl.service... Feb 9 18:39:29.726134 systemd[1]: Starting systemd-sysctl.service... Feb 9 18:39:29.726370 systemd[1]: cri-containerd-b52661c13a93763bdf40a0f4a286eae2788c78ca05f584ebb86ae99275f051b7.scope: Deactivated successfully. Feb 9 18:39:29.733535 systemd[1]: Finished systemd-sysctl.service. Feb 9 18:39:29.749792 env[1142]: time="2024-02-09T18:39:29.749748041Z" level=info msg="shim disconnected" id=b52661c13a93763bdf40a0f4a286eae2788c78ca05f584ebb86ae99275f051b7 Feb 9 18:39:29.749971 env[1142]: time="2024-02-09T18:39:29.749794408Z" level=warning msg="cleaning up after shim disconnected" id=b52661c13a93763bdf40a0f4a286eae2788c78ca05f584ebb86ae99275f051b7 namespace=k8s.io Feb 9 18:39:29.749971 env[1142]: time="2024-02-09T18:39:29.749804209Z" level=info msg="cleaning up dead shim" Feb 9 18:39:29.759197 env[1142]: time="2024-02-09T18:39:29.759161032Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:39:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1684 runtime=io.containerd.runc.v2\n" Feb 9 18:39:30.280266 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7377efe998f2a0041d5498bc7111cc37a6685fcb2d7ee5a04ec04309811b265-rootfs.mount: Deactivated successfully. Feb 9 18:39:30.429611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2793396604.mount: Deactivated successfully. 
Feb 9 18:39:30.458436 kubelet[1402]: E0209 18:39:30.458370 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:30.653079 kubelet[1402]: E0209 18:39:30.653029 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:39:30.654997 env[1142]: time="2024-02-09T18:39:30.654956522Z" level=info msg="CreateContainer within sandbox \"8f057ab36382580cf0e29ebdaf907aa1fad475e5e228e84b5d9981ef37a151b2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 18:39:30.672243 env[1142]: time="2024-02-09T18:39:30.672203969Z" level=info msg="CreateContainer within sandbox \"8f057ab36382580cf0e29ebdaf907aa1fad475e5e228e84b5d9981ef37a151b2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a3cf4bde0d7f2cae9fc98e204c654e8405d5c1b1b4bd34143fdfbfcbb3a11203\"" Feb 9 18:39:30.672866 env[1142]: time="2024-02-09T18:39:30.672814574Z" level=info msg="StartContainer for \"a3cf4bde0d7f2cae9fc98e204c654e8405d5c1b1b4bd34143fdfbfcbb3a11203\"" Feb 9 18:39:30.686867 systemd[1]: Started cri-containerd-a3cf4bde0d7f2cae9fc98e204c654e8405d5c1b1b4bd34143fdfbfcbb3a11203.scope. Feb 9 18:39:30.734181 env[1142]: time="2024-02-09T18:39:30.734125171Z" level=info msg="StartContainer for \"a3cf4bde0d7f2cae9fc98e204c654e8405d5c1b1b4bd34143fdfbfcbb3a11203\" returns successfully" Feb 9 18:39:30.734310 systemd[1]: cri-containerd-a3cf4bde0d7f2cae9fc98e204c654e8405d5c1b1b4bd34143fdfbfcbb3a11203.scope: Deactivated successfully. 
Feb 9 18:39:30.839341 env[1142]: time="2024-02-09T18:39:30.839295408Z" level=info msg="shim disconnected" id=a3cf4bde0d7f2cae9fc98e204c654e8405d5c1b1b4bd34143fdfbfcbb3a11203 Feb 9 18:39:30.839650 env[1142]: time="2024-02-09T18:39:30.839629135Z" level=warning msg="cleaning up after shim disconnected" id=a3cf4bde0d7f2cae9fc98e204c654e8405d5c1b1b4bd34143fdfbfcbb3a11203 namespace=k8s.io Feb 9 18:39:30.839797 env[1142]: time="2024-02-09T18:39:30.839761713Z" level=info msg="cleaning up dead shim" Feb 9 18:39:30.842242 env[1142]: time="2024-02-09T18:39:30.842208095Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:39:30.844210 env[1142]: time="2024-02-09T18:39:30.844177010Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:39:30.845621 env[1142]: time="2024-02-09T18:39:30.845595167Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:39:30.846968 env[1142]: time="2024-02-09T18:39:30.846935795Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:39:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1740 runtime=io.containerd.runc.v2\n" Feb 9 18:39:30.847298 env[1142]: time="2024-02-09T18:39:30.847256559Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 9 18:39:30.847369 env[1142]: time="2024-02-09T18:39:30.846990882Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:39:30.849677 env[1142]: time="2024-02-09T18:39:30.849647973Z" level=info msg="CreateContainer within sandbox \"57014abd55218f6a3c1f4a38eb5b0e6d44af46c219a28e244ccb3dfa5842953c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 18:39:30.862003 env[1142]: time="2024-02-09T18:39:30.861963692Z" level=info msg="CreateContainer within sandbox \"57014abd55218f6a3c1f4a38eb5b0e6d44af46c219a28e244ccb3dfa5842953c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5e90645cf6ff0e1e5ad8fce2c4c9046f25ceb7672e45dbd5d3166893ce9321bb\"" Feb 9 18:39:30.862706 env[1142]: time="2024-02-09T18:39:30.862641747Z" level=info msg="StartContainer for \"5e90645cf6ff0e1e5ad8fce2c4c9046f25ceb7672e45dbd5d3166893ce9321bb\"" Feb 9 18:39:30.876733 systemd[1]: Started cri-containerd-5e90645cf6ff0e1e5ad8fce2c4c9046f25ceb7672e45dbd5d3166893ce9321bb.scope. Feb 9 18:39:30.922682 env[1142]: time="2024-02-09T18:39:30.922047517Z" level=info msg="StartContainer for \"5e90645cf6ff0e1e5ad8fce2c4c9046f25ceb7672e45dbd5d3166893ce9321bb\" returns successfully" Feb 9 18:39:31.280367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2476497458.mount: Deactivated successfully. 
Feb 9 18:39:31.458839 kubelet[1402]: E0209 18:39:31.458795 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:31.656365 kubelet[1402]: E0209 18:39:31.656302 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:39:31.658751 kubelet[1402]: E0209 18:39:31.658710 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:39:31.660803 env[1142]: time="2024-02-09T18:39:31.660754009Z" level=info msg="CreateContainer within sandbox \"8f057ab36382580cf0e29ebdaf907aa1fad475e5e228e84b5d9981ef37a151b2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 18:39:31.664428 kubelet[1402]: I0209 18:39:31.664387 1402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-zzt8m" podStartSLOduration=-9.223372022190424e+09 pod.CreationTimestamp="2024-02-09 18:39:17 +0000 UTC" firstStartedPulling="2024-02-09 18:39:23.588998673 +0000 UTC m=+20.253005526" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:39:31.664186502 +0000 UTC m=+28.328193355" watchObservedRunningTime="2024-02-09 18:39:31.664352404 +0000 UTC m=+28.328359257" Feb 9 18:39:31.676513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount615414105.mount: Deactivated successfully. Feb 9 18:39:31.678036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount523504830.mount: Deactivated successfully. 
Feb 9 18:39:31.680054 env[1142]: time="2024-02-09T18:39:31.680011509Z" level=info msg="CreateContainer within sandbox \"8f057ab36382580cf0e29ebdaf907aa1fad475e5e228e84b5d9981ef37a151b2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0c4280a0336efc0c2cc6fbffca378db809c533d0f9bbac03bd6c9094068e1a9b\"" Feb 9 18:39:31.680612 env[1142]: time="2024-02-09T18:39:31.680561301Z" level=info msg="StartContainer for \"0c4280a0336efc0c2cc6fbffca378db809c533d0f9bbac03bd6c9094068e1a9b\"" Feb 9 18:39:31.695670 systemd[1]: Started cri-containerd-0c4280a0336efc0c2cc6fbffca378db809c533d0f9bbac03bd6c9094068e1a9b.scope. Feb 9 18:39:31.724661 systemd[1]: cri-containerd-0c4280a0336efc0c2cc6fbffca378db809c533d0f9bbac03bd6c9094068e1a9b.scope: Deactivated successfully. Feb 9 18:39:31.725793 env[1142]: time="2024-02-09T18:39:31.725625644Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6f2869a_c5f3_4fd4_9da1_14ab9e17af72.slice/cri-containerd-0c4280a0336efc0c2cc6fbffca378db809c533d0f9bbac03bd6c9094068e1a9b.scope/memory.events\": no such file or directory" Feb 9 18:39:31.727224 env[1142]: time="2024-02-09T18:39:31.727187690Z" level=info msg="StartContainer for \"0c4280a0336efc0c2cc6fbffca378db809c533d0f9bbac03bd6c9094068e1a9b\" returns successfully" Feb 9 18:39:31.745958 env[1142]: time="2024-02-09T18:39:31.745915119Z" level=info msg="shim disconnected" id=0c4280a0336efc0c2cc6fbffca378db809c533d0f9bbac03bd6c9094068e1a9b Feb 9 18:39:31.745958 env[1142]: time="2024-02-09T18:39:31.745957445Z" level=warning msg="cleaning up after shim disconnected" id=0c4280a0336efc0c2cc6fbffca378db809c533d0f9bbac03bd6c9094068e1a9b namespace=k8s.io Feb 9 18:39:31.746184 env[1142]: time="2024-02-09T18:39:31.745967126Z" level=info msg="cleaning up dead shim" Feb 9 18:39:31.752521 env[1142]: time="2024-02-09T18:39:31.752490386Z" level=warning 
msg="cleanup warnings time=\"2024-02-09T18:39:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1943 runtime=io.containerd.runc.v2\n" Feb 9 18:39:32.279679 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c4280a0336efc0c2cc6fbffca378db809c533d0f9bbac03bd6c9094068e1a9b-rootfs.mount: Deactivated successfully. Feb 9 18:39:32.459914 kubelet[1402]: E0209 18:39:32.459883 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:32.662639 kubelet[1402]: E0209 18:39:32.662400 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:39:32.663182 kubelet[1402]: E0209 18:39:32.663164 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:39:32.665766 env[1142]: time="2024-02-09T18:39:32.665667882Z" level=info msg="CreateContainer within sandbox \"8f057ab36382580cf0e29ebdaf907aa1fad475e5e228e84b5d9981ef37a151b2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 18:39:32.681408 env[1142]: time="2024-02-09T18:39:32.681373800Z" level=info msg="CreateContainer within sandbox \"8f057ab36382580cf0e29ebdaf907aa1fad475e5e228e84b5d9981ef37a151b2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"221aac96c0dae6d2ef8d86f849e596fd5ddf789700e40841d043ee86c5680681\"" Feb 9 18:39:32.682064 env[1142]: time="2024-02-09T18:39:32.681998358Z" level=info msg="StartContainer for \"221aac96c0dae6d2ef8d86f849e596fd5ddf789700e40841d043ee86c5680681\"" Feb 9 18:39:32.698356 systemd[1]: Started cri-containerd-221aac96c0dae6d2ef8d86f849e596fd5ddf789700e40841d043ee86c5680681.scope. 
Feb 9 18:39:32.733708 env[1142]: time="2024-02-09T18:39:32.733666878Z" level=info msg="StartContainer for \"221aac96c0dae6d2ef8d86f849e596fd5ddf789700e40841d043ee86c5680681\" returns successfully" Feb 9 18:39:32.885718 kubelet[1402]: I0209 18:39:32.885688 1402 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 18:39:32.973460 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 9 18:39:33.204437 kernel: Initializing XFRM netlink socket Feb 9 18:39:33.206441 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 9 18:39:33.460487 kubelet[1402]: E0209 18:39:33.460331 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:33.666386 kubelet[1402]: E0209 18:39:33.666337 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:39:33.683721 kubelet[1402]: I0209 18:39:33.683682 1402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-qslrd" podStartSLOduration=-9.22337202017114e+09 pod.CreationTimestamp="2024-02-09 18:39:17 +0000 UTC" firstStartedPulling="2024-02-09 18:39:23.587563208 +0000 UTC m=+20.251570061" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:39:33.679237712 +0000 UTC m=+30.343244565" watchObservedRunningTime="2024-02-09 18:39:33.683635671 +0000 UTC m=+30.347642524" Feb 9 18:39:33.952941 kubelet[1402]: I0209 18:39:33.952893 1402 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:39:33.957595 systemd[1]: Created slice kubepods-besteffort-pod0c5926da_6581_4b04_8439_eddd86416b91.slice. 
Feb 9 18:39:34.088144 kubelet[1402]: I0209 18:39:34.088099 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp5zr\" (UniqueName: \"kubernetes.io/projected/0c5926da-6581-4b04-8439-eddd86416b91-kube-api-access-cp5zr\") pod \"nginx-deployment-8ffc5cf85-g6lzg\" (UID: \"0c5926da-6581-4b04-8439-eddd86416b91\") " pod="default/nginx-deployment-8ffc5cf85-g6lzg"
Feb 9 18:39:34.260729 env[1142]: time="2024-02-09T18:39:34.260368916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-g6lzg,Uid:0c5926da-6581-4b04-8439-eddd86416b91,Namespace:default,Attempt:0,}"
Feb 9 18:39:34.461371 kubelet[1402]: E0209 18:39:34.461319 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:34.667500 kubelet[1402]: E0209 18:39:34.667471 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:39:34.812552 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Feb 9 18:39:34.812636 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 9 18:39:34.812051 systemd-networkd[1040]: cilium_host: Link UP
Feb 9 18:39:34.812175 systemd-networkd[1040]: cilium_net: Link UP
Feb 9 18:39:34.812302 systemd-networkd[1040]: cilium_net: Gained carrier
Feb 9 18:39:34.812427 systemd-networkd[1040]: cilium_host: Gained carrier
Feb 9 18:39:34.887848 systemd-networkd[1040]: cilium_vxlan: Link UP
Feb 9 18:39:34.887855 systemd-networkd[1040]: cilium_vxlan: Gained carrier
Feb 9 18:39:34.951593 systemd-networkd[1040]: cilium_host: Gained IPv6LL
Feb 9 18:39:35.007526 systemd-networkd[1040]: cilium_net: Gained IPv6LL
Feb 9 18:39:35.170473 kernel: NET: Registered PF_ALG protocol family
Feb 9 18:39:35.462134 kubelet[1402]: E0209 18:39:35.462064 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:35.669067 kubelet[1402]: E0209 18:39:35.668867 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:39:35.712735 systemd-networkd[1040]: lxc_health: Link UP
Feb 9 18:39:35.726300 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 18:39:35.724751 systemd-networkd[1040]: lxc_health: Gained carrier
Feb 9 18:39:35.951562 systemd-networkd[1040]: cilium_vxlan: Gained IPv6LL
Feb 9 18:39:36.299085 systemd-networkd[1040]: lxcdbec6ae99e84: Link UP
Feb 9 18:39:36.315447 kernel: eth0: renamed from tmpa365c
Feb 9 18:39:36.321353 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 18:39:36.321435 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcdbec6ae99e84: link becomes ready
Feb 9 18:39:36.321966 systemd-networkd[1040]: lxcdbec6ae99e84: Gained carrier
Feb 9 18:39:36.463268 kubelet[1402]: E0209 18:39:36.463194 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:36.670619 kubelet[1402]: E0209 18:39:36.670593 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:39:36.911568 systemd-networkd[1040]: lxc_health: Gained IPv6LL
Feb 9 18:39:37.463869 kubelet[1402]: E0209 18:39:37.463831 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:37.671886 kubelet[1402]: E0209 18:39:37.671856 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:39:38.063592 systemd-networkd[1040]: lxcdbec6ae99e84: Gained IPv6LL
Feb 9 18:39:38.465107 kubelet[1402]: E0209 18:39:38.465071 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:38.673593 kubelet[1402]: E0209 18:39:38.673561 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:39:39.465771 kubelet[1402]: E0209 18:39:39.465740 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:39.828498 env[1142]: time="2024-02-09T18:39:39.828427064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 18:39:39.828498 env[1142]: time="2024-02-09T18:39:39.828468948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 18:39:39.828860 env[1142]: time="2024-02-09T18:39:39.828479829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 18:39:39.829102 env[1142]: time="2024-02-09T18:39:39.829063038Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a365ca81cbc56a8f1755058474d001ca0b9cbbf7b10780853e71023a28d5d7ef pid=2493 runtime=io.containerd.runc.v2
Feb 9 18:39:39.844246 systemd[1]: Started cri-containerd-a365ca81cbc56a8f1755058474d001ca0b9cbbf7b10780853e71023a28d5d7ef.scope.
Feb 9 18:39:39.911777 systemd-resolved[1084]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 9 18:39:39.929100 env[1142]: time="2024-02-09T18:39:39.929063570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-g6lzg,Uid:0c5926da-6581-4b04-8439-eddd86416b91,Namespace:default,Attempt:0,} returns sandbox id \"a365ca81cbc56a8f1755058474d001ca0b9cbbf7b10780853e71023a28d5d7ef\""
Feb 9 18:39:39.930679 env[1142]: time="2024-02-09T18:39:39.930650865Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 9 18:39:40.467194 kubelet[1402]: E0209 18:39:40.467155 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:41.468098 kubelet[1402]: E0209 18:39:41.468058 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:41.986704 update_engine[1132]: I0209 18:39:41.986646 1132 update_attempter.cc:509] Updating boot flags...
Feb 9 18:39:42.096616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount786729704.mount: Deactivated successfully.
Feb 9 18:39:42.468201 kubelet[1402]: E0209 18:39:42.468150 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:42.817259 env[1142]: time="2024-02-09T18:39:42.816940724Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:39:42.818817 env[1142]: time="2024-02-09T18:39:42.818786980Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:39:42.820449 env[1142]: time="2024-02-09T18:39:42.820420459Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:39:42.822270 env[1142]: time="2024-02-09T18:39:42.822245273Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:39:42.822843 env[1142]: time="2024-02-09T18:39:42.822813394Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\""
Feb 9 18:39:42.824957 env[1142]: time="2024-02-09T18:39:42.824926789Z" level=info msg="CreateContainer within sandbox \"a365ca81cbc56a8f1755058474d001ca0b9cbbf7b10780853e71023a28d5d7ef\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Feb 9 18:39:42.833719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2614752074.mount: Deactivated successfully.
Feb 9 18:39:42.870264 env[1142]: time="2024-02-09T18:39:42.870220464Z" level=info msg="CreateContainer within sandbox \"a365ca81cbc56a8f1755058474d001ca0b9cbbf7b10780853e71023a28d5d7ef\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"81468776619187dde7d02524e0529849e05454db4162cd19bfb1bad6c1e71764\""
Feb 9 18:39:42.871669 env[1142]: time="2024-02-09T18:39:42.870854430Z" level=info msg="StartContainer for \"81468776619187dde7d02524e0529849e05454db4162cd19bfb1bad6c1e71764\""
Feb 9 18:39:42.879303 kernel: hrtimer: interrupt took 19915138 ns
Feb 9 18:39:42.887584 systemd[1]: Started cri-containerd-81468776619187dde7d02524e0529849e05454db4162cd19bfb1bad6c1e71764.scope.
Feb 9 18:39:42.927945 env[1142]: time="2024-02-09T18:39:42.927892684Z" level=info msg="StartContainer for \"81468776619187dde7d02524e0529849e05454db4162cd19bfb1bad6c1e71764\" returns successfully"
Feb 9 18:39:43.469277 kubelet[1402]: E0209 18:39:43.469224 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:43.694585 kubelet[1402]: I0209 18:39:43.694545 1402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-g6lzg" podStartSLOduration=-9.223372026160263e+09 pod.CreationTimestamp="2024-02-09 18:39:33 +0000 UTC" firstStartedPulling="2024-02-09 18:39:39.930183465 +0000 UTC m=+36.594190278" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:39:43.694310354 +0000 UTC m=+40.358317167" watchObservedRunningTime="2024-02-09 18:39:43.694513088 +0000 UTC m=+40.358519901"
Feb 9 18:39:44.442778 kubelet[1402]: E0209 18:39:44.442745 1402 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:44.470015 kubelet[1402]: E0209 18:39:44.469982 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:45.471051 kubelet[1402]: E0209 18:39:45.471008 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:45.548375 kubelet[1402]: I0209 18:39:45.548342 1402 topology_manager.go:210] "Topology Admit Handler"
Feb 9 18:39:45.552958 systemd[1]: Created slice kubepods-besteffort-pod34ad4da6_24ae_44db_b3b0_3a273ead13be.slice.
Feb 9 18:39:45.648391 kubelet[1402]: I0209 18:39:45.648349 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/34ad4da6-24ae-44db-b3b0-3a273ead13be-data\") pod \"nfs-server-provisioner-0\" (UID: \"34ad4da6-24ae-44db-b3b0-3a273ead13be\") " pod="default/nfs-server-provisioner-0"
Feb 9 18:39:45.648562 kubelet[1402]: I0209 18:39:45.648477 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8npn\" (UniqueName: \"kubernetes.io/projected/34ad4da6-24ae-44db-b3b0-3a273ead13be-kube-api-access-p8npn\") pod \"nfs-server-provisioner-0\" (UID: \"34ad4da6-24ae-44db-b3b0-3a273ead13be\") " pod="default/nfs-server-provisioner-0"
Feb 9 18:39:45.856471 env[1142]: time="2024-02-09T18:39:45.856433869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:34ad4da6-24ae-44db-b3b0-3a273ead13be,Namespace:default,Attempt:0,}"
Feb 9 18:39:45.883737 systemd-networkd[1040]: lxcee68c66abd4b: Link UP
Feb 9 18:39:45.896670 kernel: eth0: renamed from tmpb426b
Feb 9 18:39:45.908462 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 18:39:45.908544 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcee68c66abd4b: link becomes ready
Feb 9 18:39:45.908228 systemd-networkd[1040]: lxcee68c66abd4b: Gained carrier
Feb 9 18:39:46.141023 env[1142]: time="2024-02-09T18:39:46.140028263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 18:39:46.141023 env[1142]: time="2024-02-09T18:39:46.140065625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 18:39:46.141023 env[1142]: time="2024-02-09T18:39:46.140075866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 18:39:46.141023 env[1142]: time="2024-02-09T18:39:46.140237635Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b426bd6c1df2f64c14bc393975b4bf8abc7aedb1bd5049005b9f100816c07bb1 pid=2686 runtime=io.containerd.runc.v2
Feb 9 18:39:46.154276 systemd[1]: Started cri-containerd-b426bd6c1df2f64c14bc393975b4bf8abc7aedb1bd5049005b9f100816c07bb1.scope.
Feb 9 18:39:46.184615 systemd-resolved[1084]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 9 18:39:46.201361 env[1142]: time="2024-02-09T18:39:46.201309396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:34ad4da6-24ae-44db-b3b0-3a273ead13be,Namespace:default,Attempt:0,} returns sandbox id \"b426bd6c1df2f64c14bc393975b4bf8abc7aedb1bd5049005b9f100816c07bb1\""
Feb 9 18:39:46.202996 env[1142]: time="2024-02-09T18:39:46.202911093Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 9 18:39:46.472469 kubelet[1402]: E0209 18:39:46.472033 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:47.151634 systemd-networkd[1040]: lxcee68c66abd4b: Gained IPv6LL
Feb 9 18:39:47.472887 kubelet[1402]: E0209 18:39:47.472544 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:48.389576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2678319153.mount: Deactivated successfully.
Feb 9 18:39:48.472826 kubelet[1402]: E0209 18:39:48.472775 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:49.473525 kubelet[1402]: E0209 18:39:49.473465 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:50.177845 env[1142]: time="2024-02-09T18:39:50.177767013Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:39:50.179428 env[1142]: time="2024-02-09T18:39:50.179378774Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:39:50.180916 env[1142]: time="2024-02-09T18:39:50.180884049Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:39:50.182767 env[1142]: time="2024-02-09T18:39:50.182739223Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:39:50.183444 env[1142]: time="2024-02-09T18:39:50.183405416Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\""
Feb 9 18:39:50.185188 env[1142]: time="2024-02-09T18:39:50.185153904Z" level=info msg="CreateContainer within sandbox \"b426bd6c1df2f64c14bc393975b4bf8abc7aedb1bd5049005b9f100816c07bb1\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 9 18:39:50.194773 env[1142]: time="2024-02-09T18:39:50.194733866Z" level=info msg="CreateContainer within sandbox \"b426bd6c1df2f64c14bc393975b4bf8abc7aedb1bd5049005b9f100816c07bb1\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"8b3ff0a83695b5a26d10ccc9cb87a1f685b5c275db6f69757693de6fb0530a3b\""
Feb 9 18:39:50.195274 env[1142]: time="2024-02-09T18:39:50.195131326Z" level=info msg="StartContainer for \"8b3ff0a83695b5a26d10ccc9cb87a1f685b5c275db6f69757693de6fb0530a3b\""
Feb 9 18:39:50.214473 systemd[1]: Started cri-containerd-8b3ff0a83695b5a26d10ccc9cb87a1f685b5c275db6f69757693de6fb0530a3b.scope.
Feb 9 18:39:50.250026 env[1142]: time="2024-02-09T18:39:50.249974844Z" level=info msg="StartContainer for \"8b3ff0a83695b5a26d10ccc9cb87a1f685b5c275db6f69757693de6fb0530a3b\" returns successfully"
Feb 9 18:39:50.473697 kubelet[1402]: E0209 18:39:50.473586 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:50.707134 kubelet[1402]: I0209 18:39:50.707073 1402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372031147738e+09 pod.CreationTimestamp="2024-02-09 18:39:45 +0000 UTC" firstStartedPulling="2024-02-09 18:39:46.202612195 +0000 UTC m=+42.866619048" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:39:50.706769938 +0000 UTC m=+47.370776791" watchObservedRunningTime="2024-02-09 18:39:50.707038032 +0000 UTC m=+47.371044845"
Feb 9 18:39:51.474777 kubelet[1402]: E0209 18:39:51.474705 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:52.475310 kubelet[1402]: E0209 18:39:52.475245 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:53.475657 kubelet[1402]: E0209 18:39:53.475589 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:54.476161 kubelet[1402]: E0209 18:39:54.476090 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:55.476876 kubelet[1402]: E0209 18:39:55.476834 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:56.477932 kubelet[1402]: E0209 18:39:56.477893 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:57.479055 kubelet[1402]: E0209 18:39:57.479009 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:58.479330 kubelet[1402]: E0209 18:39:58.479292 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:59.479621 kubelet[1402]: E0209 18:39:59.479572 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:40:00.208775 kubelet[1402]: I0209 18:40:00.208729 1402 topology_manager.go:210] "Topology Admit Handler"
Feb 9 18:40:00.213206 systemd[1]: Created slice kubepods-besteffort-podb87a5e66_2e8d_4d73_bb82_79a1b74e0049.slice.
Feb 9 18:40:00.323862 kubelet[1402]: I0209 18:40:00.323809 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-75aba929-1f20-4843-9010-249f04d56a37\" (UniqueName: \"kubernetes.io/nfs/b87a5e66-2e8d-4d73-bb82-79a1b74e0049-pvc-75aba929-1f20-4843-9010-249f04d56a37\") pod \"test-pod-1\" (UID: \"b87a5e66-2e8d-4d73-bb82-79a1b74e0049\") " pod="default/test-pod-1"
Feb 9 18:40:00.323862 kubelet[1402]: I0209 18:40:00.323867 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sjdm\" (UniqueName: \"kubernetes.io/projected/b87a5e66-2e8d-4d73-bb82-79a1b74e0049-kube-api-access-5sjdm\") pod \"test-pod-1\" (UID: \"b87a5e66-2e8d-4d73-bb82-79a1b74e0049\") " pod="default/test-pod-1"
Feb 9 18:40:00.450448 kernel: FS-Cache: Loaded
Feb 9 18:40:00.473788 kernel: RPC: Registered named UNIX socket transport module.
Feb 9 18:40:00.473880 kernel: RPC: Registered udp transport module.
Feb 9 18:40:00.473903 kernel: RPC: Registered tcp transport module.
Feb 9 18:40:00.473922 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 9 18:40:00.480271 kubelet[1402]: E0209 18:40:00.480241 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:40:00.507452 kernel: FS-Cache: Netfs 'nfs' registered for caching
Feb 9 18:40:00.641448 kernel: NFS: Registering the id_resolver key type
Feb 9 18:40:00.641579 kernel: Key type id_resolver registered
Feb 9 18:40:00.641608 kernel: Key type id_legacy registered
Feb 9 18:40:00.659240 nfsidmap[2829]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 9 18:40:00.662564 nfsidmap[2832]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 9 18:40:00.816214 env[1142]: time="2024-02-09T18:40:00.816109141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:b87a5e66-2e8d-4d73-bb82-79a1b74e0049,Namespace:default,Attempt:0,}"
Feb 9 18:40:00.840895 systemd-networkd[1040]: lxcf74a6997c065: Link UP
Feb 9 18:40:00.850444 kernel: eth0: renamed from tmp1db3d
Feb 9 18:40:00.858595 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 18:40:00.858677 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf74a6997c065: link becomes ready
Feb 9 18:40:00.858717 systemd-networkd[1040]: lxcf74a6997c065: Gained carrier
Feb 9 18:40:01.085043 env[1142]: time="2024-02-09T18:40:01.084975758Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 18:40:01.085043 env[1142]: time="2024-02-09T18:40:01.085016080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 18:40:01.085043 env[1142]: time="2024-02-09T18:40:01.085026680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 18:40:01.085258 env[1142]: time="2024-02-09T18:40:01.085156364Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1db3d796fb6952ef75957f76f6cb9a5eb48b86eccf9c2013397c4e79d18e765a pid=2868 runtime=io.containerd.runc.v2
Feb 9 18:40:01.099410 systemd[1]: Started cri-containerd-1db3d796fb6952ef75957f76f6cb9a5eb48b86eccf9c2013397c4e79d18e765a.scope.
Feb 9 18:40:01.122467 systemd-resolved[1084]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 9 18:40:01.138604 env[1142]: time="2024-02-09T18:40:01.138559251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:b87a5e66-2e8d-4d73-bb82-79a1b74e0049,Namespace:default,Attempt:0,} returns sandbox id \"1db3d796fb6952ef75957f76f6cb9a5eb48b86eccf9c2013397c4e79d18e765a\""
Feb 9 18:40:01.139992 env[1142]: time="2024-02-09T18:40:01.139964977Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 9 18:40:01.416731 env[1142]: time="2024-02-09T18:40:01.416611967Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:40:01.418309 env[1142]: time="2024-02-09T18:40:01.418273982Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:40:01.419718 env[1142]: time="2024-02-09T18:40:01.419682869Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:40:01.421819 env[1142]: time="2024-02-09T18:40:01.421781498Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:40:01.422577 env[1142]: time="2024-02-09T18:40:01.422542604Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\""
Feb 9 18:40:01.424701 env[1142]: time="2024-02-09T18:40:01.424670674Z" level=info msg="CreateContainer within sandbox \"1db3d796fb6952ef75957f76f6cb9a5eb48b86eccf9c2013397c4e79d18e765a\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 9 18:40:01.435242 env[1142]: time="2024-02-09T18:40:01.435198582Z" level=info msg="CreateContainer within sandbox \"1db3d796fb6952ef75957f76f6cb9a5eb48b86eccf9c2013397c4e79d18e765a\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"68bec26fa16c56d6857d97ea3fd1873c123ba6a364243985b8e1123b16e77425\""
Feb 9 18:40:01.436592 env[1142]: time="2024-02-09T18:40:01.436555547Z" level=info msg="StartContainer for \"68bec26fa16c56d6857d97ea3fd1873c123ba6a364243985b8e1123b16e77425\""
Feb 9 18:40:01.455847 systemd[1]: Started cri-containerd-68bec26fa16c56d6857d97ea3fd1873c123ba6a364243985b8e1123b16e77425.scope.
Feb 9 18:40:01.481055 kubelet[1402]: E0209 18:40:01.481002 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:40:01.484995 env[1142]: time="2024-02-09T18:40:01.484937507Z" level=info msg="StartContainer for \"68bec26fa16c56d6857d97ea3fd1873c123ba6a364243985b8e1123b16e77425\" returns successfully"
Feb 9 18:40:01.723986 kubelet[1402]: I0209 18:40:01.723879 1402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.22337202013093e+09 pod.CreationTimestamp="2024-02-09 18:39:45 +0000 UTC" firstStartedPulling="2024-02-09 18:40:01.139708369 +0000 UTC m=+57.803715222" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:40:01.723073824 +0000 UTC m=+58.387080677" watchObservedRunningTime="2024-02-09 18:40:01.72384553 +0000 UTC m=+58.387852383"
Feb 9 18:40:01.999621 systemd-networkd[1040]: lxcf74a6997c065: Gained IPv6LL
Feb 9 18:40:02.481632 kubelet[1402]: E0209 18:40:02.481598 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:40:03.483028 kubelet[1402]: E0209 18:40:03.482971 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:40:04.443105 kubelet[1402]: E0209 18:40:04.442961 1402 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:40:04.483364 kubelet[1402]: E0209 18:40:04.483062 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:40:05.484036 kubelet[1402]: E0209 18:40:05.483997 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:40:06.485442 kubelet[1402]: E0209 18:40:06.485396 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:40:07.486726 kubelet[1402]: E0209 18:40:07.486691 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:40:08.405588 env[1142]: time="2024-02-09T18:40:08.405488013Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 18:40:08.410289 env[1142]: time="2024-02-09T18:40:08.410256462Z" level=info msg="StopContainer for \"221aac96c0dae6d2ef8d86f849e596fd5ddf789700e40841d043ee86c5680681\" with timeout 1 (s)"
Feb 9 18:40:08.410533 env[1142]: time="2024-02-09T18:40:08.410504589Z" level=info msg="Stop container \"221aac96c0dae6d2ef8d86f849e596fd5ddf789700e40841d043ee86c5680681\" with signal terminated"
Feb 9 18:40:08.416370 systemd-networkd[1040]: lxc_health: Link DOWN
Feb 9 18:40:08.416376 systemd-networkd[1040]: lxc_health: Lost carrier
Feb 9 18:40:08.446032 systemd[1]: cri-containerd-221aac96c0dae6d2ef8d86f849e596fd5ddf789700e40841d043ee86c5680681.scope: Deactivated successfully.
Feb 9 18:40:08.446328 systemd[1]: cri-containerd-221aac96c0dae6d2ef8d86f849e596fd5ddf789700e40841d043ee86c5680681.scope: Consumed 6.491s CPU time.
Feb 9 18:40:08.461475 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-221aac96c0dae6d2ef8d86f849e596fd5ddf789700e40841d043ee86c5680681-rootfs.mount: Deactivated successfully.
Feb 9 18:40:08.487780 kubelet[1402]: E0209 18:40:08.487738 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:40:08.582839 env[1142]: time="2024-02-09T18:40:08.582788884Z" level=info msg="shim disconnected" id=221aac96c0dae6d2ef8d86f849e596fd5ddf789700e40841d043ee86c5680681
Feb 9 18:40:08.582839 env[1142]: time="2024-02-09T18:40:08.582838286Z" level=warning msg="cleaning up after shim disconnected" id=221aac96c0dae6d2ef8d86f849e596fd5ddf789700e40841d043ee86c5680681 namespace=k8s.io
Feb 9 18:40:08.583037 env[1142]: time="2024-02-09T18:40:08.582849966Z" level=info msg="cleaning up dead shim"
Feb 9 18:40:08.589620 env[1142]: time="2024-02-09T18:40:08.589588548Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2999 runtime=io.containerd.runc.v2\n"
Feb 9 18:40:08.592211 env[1142]: time="2024-02-09T18:40:08.592183498Z" level=info msg="StopContainer for \"221aac96c0dae6d2ef8d86f849e596fd5ddf789700e40841d043ee86c5680681\" returns successfully"
Feb 9 18:40:08.592738 env[1142]: time="2024-02-09T18:40:08.592715713Z" level=info msg="StopPodSandbox for \"8f057ab36382580cf0e29ebdaf907aa1fad475e5e228e84b5d9981ef37a151b2\""
Feb 9 18:40:08.592803 env[1142]: time="2024-02-09T18:40:08.592768314Z" level=info msg="Container to stop \"d7377efe998f2a0041d5498bc7111cc37a6685fcb2d7ee5a04ec04309811b265\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:40:08.592803 env[1142]: time="2024-02-09T18:40:08.592785034Z" level=info msg="Container to stop \"b52661c13a93763bdf40a0f4a286eae2788c78ca05f584ebb86ae99275f051b7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:40:08.592803 env[1142]: time="2024-02-09T18:40:08.592796395Z" level=info msg="Container to stop \"a3cf4bde0d7f2cae9fc98e204c654e8405d5c1b1b4bd34143fdfbfcbb3a11203\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:40:08.594401 env[1142]: time="2024-02-09T18:40:08.592806675Z" level=info msg="Container to stop \"0c4280a0336efc0c2cc6fbffca378db809c533d0f9bbac03bd6c9094068e1a9b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:40:08.594401 env[1142]: time="2024-02-09T18:40:08.592816435Z" level=info msg="Container to stop \"221aac96c0dae6d2ef8d86f849e596fd5ddf789700e40841d043ee86c5680681\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:40:08.594124 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f057ab36382580cf0e29ebdaf907aa1fad475e5e228e84b5d9981ef37a151b2-shm.mount: Deactivated successfully.
Feb 9 18:40:08.599140 systemd[1]: cri-containerd-8f057ab36382580cf0e29ebdaf907aa1fad475e5e228e84b5d9981ef37a151b2.scope: Deactivated successfully.
Feb 9 18:40:08.614717 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f057ab36382580cf0e29ebdaf907aa1fad475e5e228e84b5d9981ef37a151b2-rootfs.mount: Deactivated successfully.
Feb 9 18:40:08.621499 env[1142]: time="2024-02-09T18:40:08.621457729Z" level=info msg="shim disconnected" id=8f057ab36382580cf0e29ebdaf907aa1fad475e5e228e84b5d9981ef37a151b2
Feb 9 18:40:08.621499 env[1142]: time="2024-02-09T18:40:08.621498210Z" level=warning msg="cleaning up after shim disconnected" id=8f057ab36382580cf0e29ebdaf907aa1fad475e5e228e84b5d9981ef37a151b2 namespace=k8s.io
Feb 9 18:40:08.621645 env[1142]: time="2024-02-09T18:40:08.621507451Z" level=info msg="cleaning up dead shim"
Feb 9 18:40:08.627553 env[1142]: time="2024-02-09T18:40:08.627517373Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3029 runtime=io.containerd.runc.v2\n"
Feb 9 18:40:08.627798 env[1142]: time="2024-02-09T18:40:08.627773140Z" level=info msg="TearDown network for sandbox \"8f057ab36382580cf0e29ebdaf907aa1fad475e5e228e84b5d9981ef37a151b2\" successfully"
Feb 9 18:40:08.627827 env[1142]: time="2024-02-09T18:40:08.627799261Z" level=info msg="StopPodSandbox for \"8f057ab36382580cf0e29ebdaf907aa1fad475e5e228e84b5d9981ef37a151b2\" returns successfully"
Feb 9 18:40:08.728459 kubelet[1402]: I0209 18:40:08.728348 1402 scope.go:115] "RemoveContainer" containerID="221aac96c0dae6d2ef8d86f849e596fd5ddf789700e40841d043ee86c5680681"
Feb 9 18:40:08.729608 env[1142]: time="2024-02-09T18:40:08.729569251Z" level=info msg="RemoveContainer for \"221aac96c0dae6d2ef8d86f849e596fd5ddf789700e40841d043ee86c5680681\""
Feb 9 18:40:08.733205 env[1142]: time="2024-02-09T18:40:08.733171988Z" level=info msg="RemoveContainer for \"221aac96c0dae6d2ef8d86f849e596fd5ddf789700e40841d043ee86c5680681\" returns successfully"
Feb 9 18:40:08.733419 kubelet[1402]: I0209 18:40:08.733392 1402 scope.go:115] "RemoveContainer" containerID="0c4280a0336efc0c2cc6fbffca378db809c533d0f9bbac03bd6c9094068e1a9b"
Feb 9 18:40:08.734357 env[1142]: time="2024-02-09T18:40:08.734325779Z" level=info msg="RemoveContainer for \"0c4280a0336efc0c2cc6fbffca378db809c533d0f9bbac03bd6c9094068e1a9b\""
Feb 9 18:40:08.736424 env[1142]: time="2024-02-09T18:40:08.736383955Z" level=info msg="RemoveContainer for \"0c4280a0336efc0c2cc6fbffca378db809c533d0f9bbac03bd6c9094068e1a9b\" returns successfully"
Feb 9 18:40:08.736600 kubelet[1402]: I0209 18:40:08.736579 1402 scope.go:115] "RemoveContainer" containerID="a3cf4bde0d7f2cae9fc98e204c654e8405d5c1b1b4bd34143fdfbfcbb3a11203"
Feb 9 18:40:08.737439 env[1142]: time="2024-02-09T18:40:08.737402742Z" level=info msg="RemoveContainer for \"a3cf4bde0d7f2cae9fc98e204c654e8405d5c1b1b4bd34143fdfbfcbb3a11203\""
Feb 9 18:40:08.739642 env[1142]: time="2024-02-09T18:40:08.739616722Z" level=info msg="RemoveContainer for \"a3cf4bde0d7f2cae9fc98e204c654e8405d5c1b1b4bd34143fdfbfcbb3a11203\" returns successfully"
Feb 9 18:40:08.739832 kubelet[1402]: I0209 18:40:08.739810 1402 scope.go:115] "RemoveContainer" containerID="b52661c13a93763bdf40a0f4a286eae2788c78ca05f584ebb86ae99275f051b7"
Feb 9 18:40:08.741452 env[1142]: time="2024-02-09T18:40:08.741017040Z" level=info msg="RemoveContainer for \"b52661c13a93763bdf40a0f4a286eae2788c78ca05f584ebb86ae99275f051b7\""
Feb 9 18:40:08.743068 env[1142]: time="2024-02-09T18:40:08.743039655Z" level=info msg="RemoveContainer for \"b52661c13a93763bdf40a0f4a286eae2788c78ca05f584ebb86ae99275f051b7\" returns successfully"
Feb 9 18:40:08.743215 kubelet[1402]: I0209 18:40:08.743200 1402 scope.go:115] "RemoveContainer" containerID="d7377efe998f2a0041d5498bc7111cc37a6685fcb2d7ee5a04ec04309811b265"
Feb 9 18:40:08.744049 env[1142]: time="2024-02-09T18:40:08.744027481Z" level=info msg="RemoveContainer for \"d7377efe998f2a0041d5498bc7111cc37a6685fcb2d7ee5a04ec04309811b265\""
Feb 9 18:40:08.746205 env[1142]: time="2024-02-09T18:40:08.746176579Z" level=info msg="RemoveContainer for \"d7377efe998f2a0041d5498bc7111cc37a6685fcb2d7ee5a04ec04309811b265\" returns successfully"
Feb 9 18:40:08.746472 kubelet[1402]: I0209 18:40:08.746409 1402 scope.go:115] "RemoveContainer" containerID="221aac96c0dae6d2ef8d86f849e596fd5ddf789700e40841d043ee86c5680681"
Feb 9 18:40:08.746736 env[1142]: time="2024-02-09T18:40:08.746645752Z" level=error msg="ContainerStatus for \"221aac96c0dae6d2ef8d86f849e596fd5ddf789700e40841d043ee86c5680681\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"221aac96c0dae6d2ef8d86f849e596fd5ddf789700e40841d043ee86c5680681\": not found"
Feb 9 18:40:08.746905 kubelet[1402]: E0209 18:40:08.746880 1402 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"221aac96c0dae6d2ef8d86f849e596fd5ddf789700e40841d043ee86c5680681\": not found" containerID="221aac96c0dae6d2ef8d86f849e596fd5ddf789700e40841d043ee86c5680681"
Feb 9 18:40:08.746958 kubelet[1402]: I0209 18:40:08.746924 1402 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:221aac96c0dae6d2ef8d86f849e596fd5ddf789700e40841d043ee86c5680681} err="failed to get container status \"221aac96c0dae6d2ef8d86f849e596fd5ddf789700e40841d043ee86c5680681\": rpc error: code = NotFound desc = an error occurred when try to find container \"221aac96c0dae6d2ef8d86f849e596fd5ddf789700e40841d043ee86c5680681\": not found"
Feb 9 18:40:08.746958 kubelet[1402]: I0209 18:40:08.746936 1402 scope.go:115] "RemoveContainer" containerID="0c4280a0336efc0c2cc6fbffca378db809c533d0f9bbac03bd6c9094068e1a9b"
Feb 9 18:40:08.747129 env[1142]: time="2024-02-09T18:40:08.747081844Z" level=error msg="ContainerStatus for \"0c4280a0336efc0c2cc6fbffca378db809c533d0f9bbac03bd6c9094068e1a9b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0c4280a0336efc0c2cc6fbffca378db809c533d0f9bbac03bd6c9094068e1a9b\": not found"
Feb 9 18:40:08.747220 kubelet[1402]: E0209 18:40:08.747207 1402 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code =
NotFound desc = an error occurred when try to find container \"0c4280a0336efc0c2cc6fbffca378db809c533d0f9bbac03bd6c9094068e1a9b\": not found" containerID="0c4280a0336efc0c2cc6fbffca378db809c533d0f9bbac03bd6c9094068e1a9b" Feb 9 18:40:08.747262 kubelet[1402]: I0209 18:40:08.747229 1402 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:0c4280a0336efc0c2cc6fbffca378db809c533d0f9bbac03bd6c9094068e1a9b} err="failed to get container status \"0c4280a0336efc0c2cc6fbffca378db809c533d0f9bbac03bd6c9094068e1a9b\": rpc error: code = NotFound desc = an error occurred when try to find container \"0c4280a0336efc0c2cc6fbffca378db809c533d0f9bbac03bd6c9094068e1a9b\": not found" Feb 9 18:40:08.747262 kubelet[1402]: I0209 18:40:08.747239 1402 scope.go:115] "RemoveContainer" containerID="a3cf4bde0d7f2cae9fc98e204c654e8405d5c1b1b4bd34143fdfbfcbb3a11203" Feb 9 18:40:08.747521 env[1142]: time="2024-02-09T18:40:08.747428853Z" level=error msg="ContainerStatus for \"a3cf4bde0d7f2cae9fc98e204c654e8405d5c1b1b4bd34143fdfbfcbb3a11203\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a3cf4bde0d7f2cae9fc98e204c654e8405d5c1b1b4bd34143fdfbfcbb3a11203\": not found" Feb 9 18:40:08.747717 kubelet[1402]: E0209 18:40:08.747702 1402 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a3cf4bde0d7f2cae9fc98e204c654e8405d5c1b1b4bd34143fdfbfcbb3a11203\": not found" containerID="a3cf4bde0d7f2cae9fc98e204c654e8405d5c1b1b4bd34143fdfbfcbb3a11203" Feb 9 18:40:08.747767 kubelet[1402]: I0209 18:40:08.747732 1402 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:a3cf4bde0d7f2cae9fc98e204c654e8405d5c1b1b4bd34143fdfbfcbb3a11203} err="failed to get container status \"a3cf4bde0d7f2cae9fc98e204c654e8405d5c1b1b4bd34143fdfbfcbb3a11203\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"a3cf4bde0d7f2cae9fc98e204c654e8405d5c1b1b4bd34143fdfbfcbb3a11203\": not found" Feb 9 18:40:08.747767 kubelet[1402]: I0209 18:40:08.747742 1402 scope.go:115] "RemoveContainer" containerID="b52661c13a93763bdf40a0f4a286eae2788c78ca05f584ebb86ae99275f051b7" Feb 9 18:40:08.747967 env[1142]: time="2024-02-09T18:40:08.747920226Z" level=error msg="ContainerStatus for \"b52661c13a93763bdf40a0f4a286eae2788c78ca05f584ebb86ae99275f051b7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b52661c13a93763bdf40a0f4a286eae2788c78ca05f584ebb86ae99275f051b7\": not found" Feb 9 18:40:08.748177 kubelet[1402]: E0209 18:40:08.748097 1402 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b52661c13a93763bdf40a0f4a286eae2788c78ca05f584ebb86ae99275f051b7\": not found" containerID="b52661c13a93763bdf40a0f4a286eae2788c78ca05f584ebb86ae99275f051b7" Feb 9 18:40:08.748177 kubelet[1402]: I0209 18:40:08.748120 1402 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:b52661c13a93763bdf40a0f4a286eae2788c78ca05f584ebb86ae99275f051b7} err="failed to get container status \"b52661c13a93763bdf40a0f4a286eae2788c78ca05f584ebb86ae99275f051b7\": rpc error: code = NotFound desc = an error occurred when try to find container \"b52661c13a93763bdf40a0f4a286eae2788c78ca05f584ebb86ae99275f051b7\": not found" Feb 9 18:40:08.748177 kubelet[1402]: I0209 18:40:08.748130 1402 scope.go:115] "RemoveContainer" containerID="d7377efe998f2a0041d5498bc7111cc37a6685fcb2d7ee5a04ec04309811b265" Feb 9 18:40:08.748318 env[1142]: time="2024-02-09T18:40:08.748273356Z" level=error msg="ContainerStatus for \"d7377efe998f2a0041d5498bc7111cc37a6685fcb2d7ee5a04ec04309811b265\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"d7377efe998f2a0041d5498bc7111cc37a6685fcb2d7ee5a04ec04309811b265\": not found" Feb 9 18:40:08.748461 kubelet[1402]: E0209 18:40:08.748446 1402 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d7377efe998f2a0041d5498bc7111cc37a6685fcb2d7ee5a04ec04309811b265\": not found" containerID="d7377efe998f2a0041d5498bc7111cc37a6685fcb2d7ee5a04ec04309811b265" Feb 9 18:40:08.748510 kubelet[1402]: I0209 18:40:08.748476 1402 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:d7377efe998f2a0041d5498bc7111cc37a6685fcb2d7ee5a04ec04309811b265} err="failed to get container status \"d7377efe998f2a0041d5498bc7111cc37a6685fcb2d7ee5a04ec04309811b265\": rpc error: code = NotFound desc = an error occurred when try to find container \"d7377efe998f2a0041d5498bc7111cc37a6685fcb2d7ee5a04ec04309811b265\": not found" Feb 9 18:40:08.770670 kubelet[1402]: I0209 18:40:08.770639 1402 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-etc-cni-netd\") pod \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\" (UID: \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\") " Feb 9 18:40:08.770752 kubelet[1402]: I0209 18:40:08.770684 1402 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-lib-modules\") pod \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\" (UID: \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\") " Feb 9 18:40:08.770752 kubelet[1402]: I0209 18:40:08.770709 1402 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp87q\" (UniqueName: \"kubernetes.io/projected/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-kube-api-access-lp87q\") pod \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\" (UID: \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\") 
" Feb 9 18:40:08.770752 kubelet[1402]: I0209 18:40:08.770726 1402 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-host-proc-sys-kernel\") pod \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\" (UID: \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\") " Feb 9 18:40:08.770752 kubelet[1402]: I0209 18:40:08.770743 1402 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-bpf-maps\") pod \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\" (UID: \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\") " Feb 9 18:40:08.770847 kubelet[1402]: I0209 18:40:08.770759 1402 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-cilium-cgroup\") pod \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\" (UID: \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\") " Feb 9 18:40:08.770847 kubelet[1402]: I0209 18:40:08.770776 1402 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-host-proc-sys-net\") pod \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\" (UID: \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\") " Feb 9 18:40:08.770847 kubelet[1402]: I0209 18:40:08.770796 1402 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-hubble-tls\") pod \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\" (UID: \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\") " Feb 9 18:40:08.770847 kubelet[1402]: I0209 18:40:08.770817 1402 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-cilium-config-path\") pod \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\" (UID: \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\") " Feb 9 18:40:08.770847 kubelet[1402]: I0209 18:40:08.770833 1402 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-cni-path\") pod \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\" (UID: \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\") " Feb 9 18:40:08.770847 kubelet[1402]: I0209 18:40:08.770849 1402 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-hostproc\") pod \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\" (UID: \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\") " Feb 9 18:40:08.770975 kubelet[1402]: I0209 18:40:08.770865 1402 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-xtables-lock\") pod \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\" (UID: \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\") " Feb 9 18:40:08.770975 kubelet[1402]: I0209 18:40:08.770883 1402 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-cilium-run\") pod \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\" (UID: \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\") " Feb 9 18:40:08.770975 kubelet[1402]: I0209 18:40:08.770902 1402 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-clustermesh-secrets\") pod \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\" (UID: \"f6f2869a-c5f3-4fd4-9da1-14ab9e17af72\") " Feb 9 18:40:08.773266 kubelet[1402]: I0209 18:40:08.771100 1402 operation_generator.go:900] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f6f2869a-c5f3-4fd4-9da1-14ab9e17af72" (UID: "f6f2869a-c5f3-4fd4-9da1-14ab9e17af72"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:08.773266 kubelet[1402]: I0209 18:40:08.771102 1402 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-cni-path" (OuterVolumeSpecName: "cni-path") pod "f6f2869a-c5f3-4fd4-9da1-14ab9e17af72" (UID: "f6f2869a-c5f3-4fd4-9da1-14ab9e17af72"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:08.773266 kubelet[1402]: I0209 18:40:08.771152 1402 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-hostproc" (OuterVolumeSpecName: "hostproc") pod "f6f2869a-c5f3-4fd4-9da1-14ab9e17af72" (UID: "f6f2869a-c5f3-4fd4-9da1-14ab9e17af72"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:08.773266 kubelet[1402]: I0209 18:40:08.771337 1402 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f6f2869a-c5f3-4fd4-9da1-14ab9e17af72" (UID: "f6f2869a-c5f3-4fd4-9da1-14ab9e17af72"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:08.773266 kubelet[1402]: I0209 18:40:08.771358 1402 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f6f2869a-c5f3-4fd4-9da1-14ab9e17af72" (UID: "f6f2869a-c5f3-4fd4-9da1-14ab9e17af72"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:08.773475 kubelet[1402]: I0209 18:40:08.771446 1402 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f6f2869a-c5f3-4fd4-9da1-14ab9e17af72" (UID: "f6f2869a-c5f3-4fd4-9da1-14ab9e17af72"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:08.773475 kubelet[1402]: W0209 18:40:08.771478 1402 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 18:40:08.773475 kubelet[1402]: I0209 18:40:08.773159 1402 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f6f2869a-c5f3-4fd4-9da1-14ab9e17af72" (UID: "f6f2869a-c5f3-4fd4-9da1-14ab9e17af72"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 18:40:08.773475 kubelet[1402]: I0209 18:40:08.773207 1402 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f6f2869a-c5f3-4fd4-9da1-14ab9e17af72" (UID: "f6f2869a-c5f3-4fd4-9da1-14ab9e17af72"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:08.773475 kubelet[1402]: I0209 18:40:08.773224 1402 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f6f2869a-c5f3-4fd4-9da1-14ab9e17af72" (UID: "f6f2869a-c5f3-4fd4-9da1-14ab9e17af72"). 
InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:08.773593 kubelet[1402]: I0209 18:40:08.773240 1402 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f6f2869a-c5f3-4fd4-9da1-14ab9e17af72" (UID: "f6f2869a-c5f3-4fd4-9da1-14ab9e17af72"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:08.773593 kubelet[1402]: I0209 18:40:08.773253 1402 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f6f2869a-c5f3-4fd4-9da1-14ab9e17af72" (UID: "f6f2869a-c5f3-4fd4-9da1-14ab9e17af72"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:08.773795 kubelet[1402]: I0209 18:40:08.773751 1402 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-kube-api-access-lp87q" (OuterVolumeSpecName: "kube-api-access-lp87q") pod "f6f2869a-c5f3-4fd4-9da1-14ab9e17af72" (UID: "f6f2869a-c5f3-4fd4-9da1-14ab9e17af72"). InnerVolumeSpecName "kube-api-access-lp87q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:40:08.775087 kubelet[1402]: I0209 18:40:08.775053 1402 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f6f2869a-c5f3-4fd4-9da1-14ab9e17af72" (UID: "f6f2869a-c5f3-4fd4-9da1-14ab9e17af72"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 18:40:08.775156 kubelet[1402]: I0209 18:40:08.775141 1402 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f6f2869a-c5f3-4fd4-9da1-14ab9e17af72" (UID: "f6f2869a-c5f3-4fd4-9da1-14ab9e17af72"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:40:08.775513 systemd[1]: var-lib-kubelet-pods-f6f2869a\x2dc5f3\x2d4fd4\x2d9da1\x2d14ab9e17af72-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlp87q.mount: Deactivated successfully. Feb 9 18:40:08.775610 systemd[1]: var-lib-kubelet-pods-f6f2869a\x2dc5f3\x2d4fd4\x2d9da1\x2d14ab9e17af72-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 18:40:08.775662 systemd[1]: var-lib-kubelet-pods-f6f2869a\x2dc5f3\x2d4fd4\x2d9da1\x2d14ab9e17af72-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 9 18:40:08.871403 kubelet[1402]: I0209 18:40:08.871376 1402 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-clustermesh-secrets\") on node \"10.0.0.109\" DevicePath \"\"" Feb 9 18:40:08.871403 kubelet[1402]: I0209 18:40:08.871399 1402 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-bpf-maps\") on node \"10.0.0.109\" DevicePath \"\"" Feb 9 18:40:08.871520 kubelet[1402]: I0209 18:40:08.871430 1402 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-cilium-cgroup\") on node \"10.0.0.109\" DevicePath \"\"" Feb 9 18:40:08.871520 kubelet[1402]: I0209 18:40:08.871446 1402 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-etc-cni-netd\") on node \"10.0.0.109\" DevicePath \"\"" Feb 9 18:40:08.871520 kubelet[1402]: I0209 18:40:08.871456 1402 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-lib-modules\") on node \"10.0.0.109\" DevicePath \"\"" Feb 9 18:40:08.871520 kubelet[1402]: I0209 18:40:08.871466 1402 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-lp87q\" (UniqueName: \"kubernetes.io/projected/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-kube-api-access-lp87q\") on node \"10.0.0.109\" DevicePath \"\"" Feb 9 18:40:08.871520 kubelet[1402]: I0209 18:40:08.871478 1402 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-host-proc-sys-kernel\") on node \"10.0.0.109\" DevicePath \"\"" Feb 9 18:40:08.871520 kubelet[1402]: I0209 18:40:08.871488 1402 reconciler_common.go:295] 
"Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-cilium-config-path\") on node \"10.0.0.109\" DevicePath \"\"" Feb 9 18:40:08.871520 kubelet[1402]: I0209 18:40:08.871497 1402 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-host-proc-sys-net\") on node \"10.0.0.109\" DevicePath \"\"" Feb 9 18:40:08.871520 kubelet[1402]: I0209 18:40:08.871506 1402 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-hubble-tls\") on node \"10.0.0.109\" DevicePath \"\"" Feb 9 18:40:08.871704 kubelet[1402]: I0209 18:40:08.871515 1402 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-cilium-run\") on node \"10.0.0.109\" DevicePath \"\"" Feb 9 18:40:08.871704 kubelet[1402]: I0209 18:40:08.871523 1402 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-cni-path\") on node \"10.0.0.109\" DevicePath \"\"" Feb 9 18:40:08.871704 kubelet[1402]: I0209 18:40:08.871533 1402 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-hostproc\") on node \"10.0.0.109\" DevicePath \"\"" Feb 9 18:40:08.871704 kubelet[1402]: I0209 18:40:08.871542 1402 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72-xtables-lock\") on node \"10.0.0.109\" DevicePath \"\"" Feb 9 18:40:09.032231 systemd[1]: Removed slice kubepods-burstable-podf6f2869a_c5f3_4fd4_9da1_14ab9e17af72.slice. 
Feb 9 18:40:09.032309 systemd[1]: kubepods-burstable-podf6f2869a_c5f3_4fd4_9da1_14ab9e17af72.slice: Consumed 6.684s CPU time. Feb 9 18:40:09.488235 kubelet[1402]: E0209 18:40:09.488193 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:09.503837 kubelet[1402]: E0209 18:40:09.503801 1402 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 18:40:10.489014 kubelet[1402]: E0209 18:40:10.488955 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:10.615305 env[1142]: time="2024-02-09T18:40:10.615260048Z" level=info msg="StopPodSandbox for \"8f057ab36382580cf0e29ebdaf907aa1fad475e5e228e84b5d9981ef37a151b2\"" Feb 9 18:40:10.615601 env[1142]: time="2024-02-09T18:40:10.615353610Z" level=info msg="TearDown network for sandbox \"8f057ab36382580cf0e29ebdaf907aa1fad475e5e228e84b5d9981ef37a151b2\" successfully" Feb 9 18:40:10.615601 env[1142]: time="2024-02-09T18:40:10.615387411Z" level=info msg="StopPodSandbox for \"8f057ab36382580cf0e29ebdaf907aa1fad475e5e228e84b5d9981ef37a151b2\" returns successfully" Feb 9 18:40:10.616880 kubelet[1402]: I0209 18:40:10.616814 1402 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=f6f2869a-c5f3-4fd4-9da1-14ab9e17af72 path="/var/lib/kubelet/pods/f6f2869a-c5f3-4fd4-9da1-14ab9e17af72/volumes" Feb 9 18:40:11.489859 kubelet[1402]: E0209 18:40:11.489817 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:11.789899 kubelet[1402]: I0209 18:40:11.789589 1402 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:40:11.789899 kubelet[1402]: E0209 18:40:11.789641 1402 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="f6f2869a-c5f3-4fd4-9da1-14ab9e17af72" containerName="mount-bpf-fs" Feb 9 18:40:11.789899 kubelet[1402]: E0209 18:40:11.789651 1402 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f6f2869a-c5f3-4fd4-9da1-14ab9e17af72" containerName="mount-cgroup" Feb 9 18:40:11.789899 kubelet[1402]: E0209 18:40:11.789658 1402 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f6f2869a-c5f3-4fd4-9da1-14ab9e17af72" containerName="apply-sysctl-overwrites" Feb 9 18:40:11.789899 kubelet[1402]: E0209 18:40:11.789665 1402 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f6f2869a-c5f3-4fd4-9da1-14ab9e17af72" containerName="clean-cilium-state" Feb 9 18:40:11.789899 kubelet[1402]: E0209 18:40:11.789671 1402 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f6f2869a-c5f3-4fd4-9da1-14ab9e17af72" containerName="cilium-agent" Feb 9 18:40:11.789899 kubelet[1402]: I0209 18:40:11.789688 1402 memory_manager.go:346] "RemoveStaleState removing state" podUID="f6f2869a-c5f3-4fd4-9da1-14ab9e17af72" containerName="cilium-agent" Feb 9 18:40:11.794135 systemd[1]: Created slice kubepods-besteffort-pod062d8e24_9934_4953_981b_1b673cda56f8.slice. Feb 9 18:40:11.815628 kubelet[1402]: I0209 18:40:11.815587 1402 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:40:11.820071 systemd[1]: Created slice kubepods-burstable-pod92a00ba7_eb51_4b3b_a850_83a114f07f2b.slice. 
Feb 9 18:40:11.885236 kubelet[1402]: I0209 18:40:11.885203 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-cni-path\") pod \"cilium-mgjnn\" (UID: \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\") " pod="kube-system/cilium-mgjnn" Feb 9 18:40:11.885483 kubelet[1402]: I0209 18:40:11.885455 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92a00ba7-eb51-4b3b-a850-83a114f07f2b-cilium-config-path\") pod \"cilium-mgjnn\" (UID: \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\") " pod="kube-system/cilium-mgjnn" Feb 9 18:40:11.885567 kubelet[1402]: I0209 18:40:11.885496 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-host-proc-sys-kernel\") pod \"cilium-mgjnn\" (UID: \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\") " pod="kube-system/cilium-mgjnn" Feb 9 18:40:11.885567 kubelet[1402]: I0209 18:40:11.885536 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/92a00ba7-eb51-4b3b-a850-83a114f07f2b-clustermesh-secrets\") pod \"cilium-mgjnn\" (UID: \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\") " pod="kube-system/cilium-mgjnn" Feb 9 18:40:11.885567 kubelet[1402]: I0209 18:40:11.885559 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/92a00ba7-eb51-4b3b-a850-83a114f07f2b-cilium-ipsec-secrets\") pod \"cilium-mgjnn\" (UID: \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\") " pod="kube-system/cilium-mgjnn" Feb 9 18:40:11.885653 kubelet[1402]: I0209 18:40:11.885584 1402 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmrz4\" (UniqueName: \"kubernetes.io/projected/062d8e24-9934-4953-981b-1b673cda56f8-kube-api-access-kmrz4\") pod \"cilium-operator-f59cbd8c6-q7jzb\" (UID: \"062d8e24-9934-4953-981b-1b673cda56f8\") " pod="kube-system/cilium-operator-f59cbd8c6-q7jzb" Feb 9 18:40:11.885653 kubelet[1402]: I0209 18:40:11.885612 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-hostproc\") pod \"cilium-mgjnn\" (UID: \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\") " pod="kube-system/cilium-mgjnn" Feb 9 18:40:11.885653 kubelet[1402]: I0209 18:40:11.885632 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-cilium-cgroup\") pod \"cilium-mgjnn\" (UID: \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\") " pod="kube-system/cilium-mgjnn" Feb 9 18:40:11.885653 kubelet[1402]: I0209 18:40:11.885654 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-host-proc-sys-net\") pod \"cilium-mgjnn\" (UID: \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\") " pod="kube-system/cilium-mgjnn" Feb 9 18:40:11.885740 kubelet[1402]: I0209 18:40:11.885674 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/92a00ba7-eb51-4b3b-a850-83a114f07f2b-hubble-tls\") pod \"cilium-mgjnn\" (UID: \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\") " pod="kube-system/cilium-mgjnn" Feb 9 18:40:11.885740 kubelet[1402]: I0209 18:40:11.885693 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-6jkk8\" (UniqueName: \"kubernetes.io/projected/92a00ba7-eb51-4b3b-a850-83a114f07f2b-kube-api-access-6jkk8\") pod \"cilium-mgjnn\" (UID: \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\") " pod="kube-system/cilium-mgjnn" Feb 9 18:40:11.885740 kubelet[1402]: I0209 18:40:11.885712 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-xtables-lock\") pod \"cilium-mgjnn\" (UID: \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\") " pod="kube-system/cilium-mgjnn" Feb 9 18:40:11.885740 kubelet[1402]: I0209 18:40:11.885733 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-cilium-run\") pod \"cilium-mgjnn\" (UID: \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\") " pod="kube-system/cilium-mgjnn" Feb 9 18:40:11.885826 kubelet[1402]: I0209 18:40:11.885751 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-bpf-maps\") pod \"cilium-mgjnn\" (UID: \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\") " pod="kube-system/cilium-mgjnn" Feb 9 18:40:11.885826 kubelet[1402]: I0209 18:40:11.885770 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-etc-cni-netd\") pod \"cilium-mgjnn\" (UID: \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\") " pod="kube-system/cilium-mgjnn" Feb 9 18:40:11.885826 kubelet[1402]: I0209 18:40:11.885789 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-lib-modules\") pod \"cilium-mgjnn\" (UID: 
\"92a00ba7-eb51-4b3b-a850-83a114f07f2b\") " pod="kube-system/cilium-mgjnn" Feb 9 18:40:11.885826 kubelet[1402]: I0209 18:40:11.885815 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/062d8e24-9934-4953-981b-1b673cda56f8-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-q7jzb\" (UID: \"062d8e24-9934-4953-981b-1b673cda56f8\") " pod="kube-system/cilium-operator-f59cbd8c6-q7jzb" Feb 9 18:40:12.096934 kubelet[1402]: E0209 18:40:12.096897 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:12.097704 env[1142]: time="2024-02-09T18:40:12.097644204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-q7jzb,Uid:062d8e24-9934-4953-981b-1b673cda56f8,Namespace:kube-system,Attempt:0,}" Feb 9 18:40:12.110418 env[1142]: time="2024-02-09T18:40:12.110347916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:40:12.110537 env[1142]: time="2024-02-09T18:40:12.110429038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:40:12.110537 env[1142]: time="2024-02-09T18:40:12.110457479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:40:12.110718 env[1142]: time="2024-02-09T18:40:12.110627363Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/51c9e4f0b3ee9122c25bd60734ba28dabbd19cabd28b81648c1a0369ceb38c61 pid=3058 runtime=io.containerd.runc.v2 Feb 9 18:40:12.120901 systemd[1]: Started cri-containerd-51c9e4f0b3ee9122c25bd60734ba28dabbd19cabd28b81648c1a0369ceb38c61.scope. 
Feb 9 18:40:12.130718 kubelet[1402]: E0209 18:40:12.130684 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:12.131400 env[1142]: time="2024-02-09T18:40:12.131355753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mgjnn,Uid:92a00ba7-eb51-4b3b-a850-83a114f07f2b,Namespace:kube-system,Attempt:0,}" Feb 9 18:40:12.151260 env[1142]: time="2024-02-09T18:40:12.151196962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:40:12.151432 env[1142]: time="2024-02-09T18:40:12.151237523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:40:12.151432 env[1142]: time="2024-02-09T18:40:12.151272844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:40:12.151522 env[1142]: time="2024-02-09T18:40:12.151448648Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5d3700be6ce6e755a44d899e52b1d31f74ba7f23cb22169408789db3606a6ad2 pid=3093 runtime=io.containerd.runc.v2 Feb 9 18:40:12.163380 systemd[1]: Started cri-containerd-5d3700be6ce6e755a44d899e52b1d31f74ba7f23cb22169408789db3606a6ad2.scope. 
Feb 9 18:40:12.172866 env[1142]: time="2024-02-09T18:40:12.172821894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-q7jzb,Uid:062d8e24-9934-4953-981b-1b673cda56f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"51c9e4f0b3ee9122c25bd60734ba28dabbd19cabd28b81648c1a0369ceb38c61\"" Feb 9 18:40:12.177613 kubelet[1402]: E0209 18:40:12.177115 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:12.178572 env[1142]: time="2024-02-09T18:40:12.178538955Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 18:40:12.196102 env[1142]: time="2024-02-09T18:40:12.196060706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mgjnn,Uid:92a00ba7-eb51-4b3b-a850-83a114f07f2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d3700be6ce6e755a44d899e52b1d31f74ba7f23cb22169408789db3606a6ad2\"" Feb 9 18:40:12.197325 kubelet[1402]: E0209 18:40:12.196872 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:12.198677 env[1142]: time="2024-02-09T18:40:12.198642489Z" level=info msg="CreateContainer within sandbox \"5d3700be6ce6e755a44d899e52b1d31f74ba7f23cb22169408789db3606a6ad2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 18:40:12.208440 env[1142]: time="2024-02-09T18:40:12.208371689Z" level=info msg="CreateContainer within sandbox \"5d3700be6ce6e755a44d899e52b1d31f74ba7f23cb22169408789db3606a6ad2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4a2bd7691948eecf630c83a253f0606f208dda9e42e7a8a3b5ba4ee052a8bde4\"" Feb 9 18:40:12.208991 env[1142]: time="2024-02-09T18:40:12.208958063Z" level=info 
msg="StartContainer for \"4a2bd7691948eecf630c83a253f0606f208dda9e42e7a8a3b5ba4ee052a8bde4\"" Feb 9 18:40:12.222940 systemd[1]: Started cri-containerd-4a2bd7691948eecf630c83a253f0606f208dda9e42e7a8a3b5ba4ee052a8bde4.scope. Feb 9 18:40:12.280341 systemd[1]: cri-containerd-4a2bd7691948eecf630c83a253f0606f208dda9e42e7a8a3b5ba4ee052a8bde4.scope: Deactivated successfully. Feb 9 18:40:12.296615 env[1142]: time="2024-02-09T18:40:12.296559259Z" level=info msg="shim disconnected" id=4a2bd7691948eecf630c83a253f0606f208dda9e42e7a8a3b5ba4ee052a8bde4 Feb 9 18:40:12.296615 env[1142]: time="2024-02-09T18:40:12.296607820Z" level=warning msg="cleaning up after shim disconnected" id=4a2bd7691948eecf630c83a253f0606f208dda9e42e7a8a3b5ba4ee052a8bde4 namespace=k8s.io Feb 9 18:40:12.296615 env[1142]: time="2024-02-09T18:40:12.296617020Z" level=info msg="cleaning up dead shim" Feb 9 18:40:12.303904 env[1142]: time="2024-02-09T18:40:12.303865599Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3158 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T18:40:12Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/4a2bd7691948eecf630c83a253f0606f208dda9e42e7a8a3b5ba4ee052a8bde4/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 18:40:12.304189 env[1142]: time="2024-02-09T18:40:12.304103045Z" level=error msg="copy shim log" error="read /proc/self/fd/57: file already closed" Feb 9 18:40:12.307505 env[1142]: time="2024-02-09T18:40:12.307465727Z" level=error msg="Failed to pipe stderr of container \"4a2bd7691948eecf630c83a253f0606f208dda9e42e7a8a3b5ba4ee052a8bde4\"" error="reading from a closed fifo" Feb 9 18:40:12.307657 env[1142]: time="2024-02-09T18:40:12.307485688Z" level=error msg="Failed to pipe stdout of container \"4a2bd7691948eecf630c83a253f0606f208dda9e42e7a8a3b5ba4ee052a8bde4\"" error="reading from a closed fifo" Feb 9 
18:40:12.309370 env[1142]: time="2024-02-09T18:40:12.309308293Z" level=error msg="StartContainer for \"4a2bd7691948eecf630c83a253f0606f208dda9e42e7a8a3b5ba4ee052a8bde4\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 18:40:12.309856 kubelet[1402]: E0209 18:40:12.309627 1402 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="4a2bd7691948eecf630c83a253f0606f208dda9e42e7a8a3b5ba4ee052a8bde4" Feb 9 18:40:12.309856 kubelet[1402]: E0209 18:40:12.309782 1402 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 18:40:12.309856 kubelet[1402]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 18:40:12.309856 kubelet[1402]: rm /hostbin/cilium-mount Feb 9 18:40:12.310029 kubelet[1402]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6jkk8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-mgjnn_kube-system(92a00ba7-eb51-4b3b-a850-83a114f07f2b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 18:40:12.310102 kubelet[1402]: E0209 18:40:12.309830 1402 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error 
during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-mgjnn" podUID=92a00ba7-eb51-4b3b-a850-83a114f07f2b Feb 9 18:40:12.490476 kubelet[1402]: E0209 18:40:12.490365 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:12.738836 env[1142]: time="2024-02-09T18:40:12.738797982Z" level=info msg="StopPodSandbox for \"5d3700be6ce6e755a44d899e52b1d31f74ba7f23cb22169408789db3606a6ad2\"" Feb 9 18:40:12.738975 env[1142]: time="2024-02-09T18:40:12.738856384Z" level=info msg="Container to stop \"4a2bd7691948eecf630c83a253f0606f208dda9e42e7a8a3b5ba4ee052a8bde4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:40:12.744208 systemd[1]: cri-containerd-5d3700be6ce6e755a44d899e52b1d31f74ba7f23cb22169408789db3606a6ad2.scope: Deactivated successfully. Feb 9 18:40:12.762920 env[1142]: time="2024-02-09T18:40:12.762874975Z" level=info msg="shim disconnected" id=5d3700be6ce6e755a44d899e52b1d31f74ba7f23cb22169408789db3606a6ad2 Feb 9 18:40:12.762920 env[1142]: time="2024-02-09T18:40:12.762919896Z" level=warning msg="cleaning up after shim disconnected" id=5d3700be6ce6e755a44d899e52b1d31f74ba7f23cb22169408789db3606a6ad2 namespace=k8s.io Feb 9 18:40:12.763102 env[1142]: time="2024-02-09T18:40:12.762928736Z" level=info msg="cleaning up dead shim" Feb 9 18:40:12.769476 env[1142]: time="2024-02-09T18:40:12.769419216Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3190 runtime=io.containerd.runc.v2\n" Feb 9 18:40:12.769755 env[1142]: time="2024-02-09T18:40:12.769713383Z" level=info msg="TearDown network for sandbox \"5d3700be6ce6e755a44d899e52b1d31f74ba7f23cb22169408789db3606a6ad2\" successfully" Feb 9 18:40:12.769755 env[1142]: time="2024-02-09T18:40:12.769748504Z" level=info msg="StopPodSandbox for 
\"5d3700be6ce6e755a44d899e52b1d31f74ba7f23cb22169408789db3606a6ad2\" returns successfully" Feb 9 18:40:12.891215 kubelet[1402]: I0209 18:40:12.891175 1402 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-host-proc-sys-net\") pod \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\" (UID: \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\") " Feb 9 18:40:12.891215 kubelet[1402]: I0209 18:40:12.891224 1402 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-etc-cni-netd\") pod \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\" (UID: \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\") " Feb 9 18:40:12.891403 kubelet[1402]: I0209 18:40:12.891243 1402 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-cni-path\") pod \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\" (UID: \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\") " Feb 9 18:40:12.891403 kubelet[1402]: I0209 18:40:12.891267 1402 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/92a00ba7-eb51-4b3b-a850-83a114f07f2b-clustermesh-secrets\") pod \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\" (UID: \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\") " Feb 9 18:40:12.891403 kubelet[1402]: I0209 18:40:12.891285 1402 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-hostproc\") pod \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\" (UID: \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\") " Feb 9 18:40:12.891403 kubelet[1402]: I0209 18:40:12.891310 1402 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jkk8\" 
(UniqueName: \"kubernetes.io/projected/92a00ba7-eb51-4b3b-a850-83a114f07f2b-kube-api-access-6jkk8\") pod \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\" (UID: \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\") " Feb 9 18:40:12.891403 kubelet[1402]: I0209 18:40:12.891328 1402 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-bpf-maps\") pod \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\" (UID: \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\") " Feb 9 18:40:12.891403 kubelet[1402]: I0209 18:40:12.891347 1402 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-host-proc-sys-kernel\") pod \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\" (UID: \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\") " Feb 9 18:40:12.891582 kubelet[1402]: I0209 18:40:12.891379 1402 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/92a00ba7-eb51-4b3b-a850-83a114f07f2b-cilium-ipsec-secrets\") pod \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\" (UID: \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\") " Feb 9 18:40:12.891582 kubelet[1402]: I0209 18:40:12.891401 1402 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/92a00ba7-eb51-4b3b-a850-83a114f07f2b-hubble-tls\") pod \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\" (UID: \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\") " Feb 9 18:40:12.891582 kubelet[1402]: I0209 18:40:12.891429 1402 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-xtables-lock\") pod \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\" (UID: \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\") " Feb 9 18:40:12.891582 kubelet[1402]: I0209 
18:40:12.891451 1402 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92a00ba7-eb51-4b3b-a850-83a114f07f2b-cilium-config-path\") pod \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\" (UID: \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\") " Feb 9 18:40:12.891582 kubelet[1402]: I0209 18:40:12.891468 1402 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-cilium-cgroup\") pod \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\" (UID: \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\") " Feb 9 18:40:12.891582 kubelet[1402]: I0209 18:40:12.891487 1402 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-lib-modules\") pod \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\" (UID: \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\") " Feb 9 18:40:12.891736 kubelet[1402]: I0209 18:40:12.891503 1402 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-cilium-run\") pod \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\" (UID: \"92a00ba7-eb51-4b3b-a850-83a114f07f2b\") " Feb 9 18:40:12.891736 kubelet[1402]: I0209 18:40:12.891570 1402 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "92a00ba7-eb51-4b3b-a850-83a114f07f2b" (UID: "92a00ba7-eb51-4b3b-a850-83a114f07f2b"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:12.891736 kubelet[1402]: I0209 18:40:12.891599 1402 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "92a00ba7-eb51-4b3b-a850-83a114f07f2b" (UID: "92a00ba7-eb51-4b3b-a850-83a114f07f2b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:12.891736 kubelet[1402]: I0209 18:40:12.891615 1402 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "92a00ba7-eb51-4b3b-a850-83a114f07f2b" (UID: "92a00ba7-eb51-4b3b-a850-83a114f07f2b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:12.891736 kubelet[1402]: I0209 18:40:12.891629 1402 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-cni-path" (OuterVolumeSpecName: "cni-path") pod "92a00ba7-eb51-4b3b-a850-83a114f07f2b" (UID: "92a00ba7-eb51-4b3b-a850-83a114f07f2b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:12.893254 kubelet[1402]: I0209 18:40:12.892143 1402 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "92a00ba7-eb51-4b3b-a850-83a114f07f2b" (UID: "92a00ba7-eb51-4b3b-a850-83a114f07f2b"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:12.893254 kubelet[1402]: W0209 18:40:12.892136 1402 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/92a00ba7-eb51-4b3b-a850-83a114f07f2b/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 18:40:12.893254 kubelet[1402]: I0209 18:40:12.892168 1402 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "92a00ba7-eb51-4b3b-a850-83a114f07f2b" (UID: "92a00ba7-eb51-4b3b-a850-83a114f07f2b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:12.893254 kubelet[1402]: I0209 18:40:12.892330 1402 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-hostproc" (OuterVolumeSpecName: "hostproc") pod "92a00ba7-eb51-4b3b-a850-83a114f07f2b" (UID: "92a00ba7-eb51-4b3b-a850-83a114f07f2b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:12.893254 kubelet[1402]: I0209 18:40:12.892352 1402 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "92a00ba7-eb51-4b3b-a850-83a114f07f2b" (UID: "92a00ba7-eb51-4b3b-a850-83a114f07f2b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:12.893456 kubelet[1402]: I0209 18:40:12.892364 1402 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "92a00ba7-eb51-4b3b-a850-83a114f07f2b" (UID: "92a00ba7-eb51-4b3b-a850-83a114f07f2b"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:12.893456 kubelet[1402]: I0209 18:40:12.892370 1402 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "92a00ba7-eb51-4b3b-a850-83a114f07f2b" (UID: "92a00ba7-eb51-4b3b-a850-83a114f07f2b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:12.894013 kubelet[1402]: I0209 18:40:12.893982 1402 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92a00ba7-eb51-4b3b-a850-83a114f07f2b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "92a00ba7-eb51-4b3b-a850-83a114f07f2b" (UID: "92a00ba7-eb51-4b3b-a850-83a114f07f2b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 18:40:12.894915 kubelet[1402]: I0209 18:40:12.894885 1402 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92a00ba7-eb51-4b3b-a850-83a114f07f2b-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "92a00ba7-eb51-4b3b-a850-83a114f07f2b" (UID: "92a00ba7-eb51-4b3b-a850-83a114f07f2b"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 18:40:12.895601 kubelet[1402]: I0209 18:40:12.895579 1402 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92a00ba7-eb51-4b3b-a850-83a114f07f2b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "92a00ba7-eb51-4b3b-a850-83a114f07f2b" (UID: "92a00ba7-eb51-4b3b-a850-83a114f07f2b"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:40:12.895924 kubelet[1402]: I0209 18:40:12.895898 1402 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92a00ba7-eb51-4b3b-a850-83a114f07f2b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "92a00ba7-eb51-4b3b-a850-83a114f07f2b" (UID: "92a00ba7-eb51-4b3b-a850-83a114f07f2b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 18:40:12.897082 kubelet[1402]: I0209 18:40:12.897060 1402 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92a00ba7-eb51-4b3b-a850-83a114f07f2b-kube-api-access-6jkk8" (OuterVolumeSpecName: "kube-api-access-6jkk8") pod "92a00ba7-eb51-4b3b-a850-83a114f07f2b" (UID: "92a00ba7-eb51-4b3b-a850-83a114f07f2b"). InnerVolumeSpecName "kube-api-access-6jkk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:40:12.992014 systemd[1]: var-lib-kubelet-pods-92a00ba7\x2deb51\x2d4b3b\x2da850\x2d83a114f07f2b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6jkk8.mount: Deactivated successfully. Feb 9 18:40:12.992100 systemd[1]: var-lib-kubelet-pods-92a00ba7\x2deb51\x2d4b3b\x2da850\x2d83a114f07f2b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 18:40:12.992156 systemd[1]: var-lib-kubelet-pods-92a00ba7\x2deb51\x2d4b3b\x2da850\x2d83a114f07f2b-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 18:40:12.992207 systemd[1]: var-lib-kubelet-pods-92a00ba7\x2deb51\x2d4b3b\x2da850\x2d83a114f07f2b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 9 18:40:12.994439 kubelet[1402]: I0209 18:40:12.994345 1402 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/92a00ba7-eb51-4b3b-a850-83a114f07f2b-hubble-tls\") on node \"10.0.0.109\" DevicePath \"\"" Feb 9 18:40:12.994439 kubelet[1402]: I0209 18:40:12.994376 1402 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-bpf-maps\") on node \"10.0.0.109\" DevicePath \"\"" Feb 9 18:40:12.994439 kubelet[1402]: I0209 18:40:12.994389 1402 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-host-proc-sys-kernel\") on node \"10.0.0.109\" DevicePath \"\"" Feb 9 18:40:12.994439 kubelet[1402]: I0209 18:40:12.994399 1402 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/92a00ba7-eb51-4b3b-a850-83a114f07f2b-cilium-ipsec-secrets\") on node \"10.0.0.109\" DevicePath \"\"" Feb 9 18:40:12.994439 kubelet[1402]: I0209 18:40:12.994409 1402 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-cilium-run\") on node \"10.0.0.109\" DevicePath \"\"" Feb 9 18:40:12.994439 kubelet[1402]: I0209 18:40:12.994430 1402 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-xtables-lock\") on node \"10.0.0.109\" DevicePath \"\"" Feb 9 18:40:12.994439 kubelet[1402]: I0209 18:40:12.994440 1402 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92a00ba7-eb51-4b3b-a850-83a114f07f2b-cilium-config-path\") on node \"10.0.0.109\" DevicePath \"\"" Feb 9 18:40:12.994677 kubelet[1402]: I0209 18:40:12.994449 1402 reconciler_common.go:295] "Volume 
detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-cilium-cgroup\") on node \"10.0.0.109\" DevicePath \"\"" Feb 9 18:40:12.994677 kubelet[1402]: I0209 18:40:12.994459 1402 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-lib-modules\") on node \"10.0.0.109\" DevicePath \"\"" Feb 9 18:40:12.994677 kubelet[1402]: I0209 18:40:12.994468 1402 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-host-proc-sys-net\") on node \"10.0.0.109\" DevicePath \"\"" Feb 9 18:40:12.994677 kubelet[1402]: I0209 18:40:12.994476 1402 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-etc-cni-netd\") on node \"10.0.0.109\" DevicePath \"\"" Feb 9 18:40:12.994677 kubelet[1402]: I0209 18:40:12.994485 1402 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-hostproc\") on node \"10.0.0.109\" DevicePath \"\"" Feb 9 18:40:12.994677 kubelet[1402]: I0209 18:40:12.994495 1402 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-6jkk8\" (UniqueName: \"kubernetes.io/projected/92a00ba7-eb51-4b3b-a850-83a114f07f2b-kube-api-access-6jkk8\") on node \"10.0.0.109\" DevicePath \"\"" Feb 9 18:40:12.994677 kubelet[1402]: I0209 18:40:12.994503 1402 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/92a00ba7-eb51-4b3b-a850-83a114f07f2b-cni-path\") on node \"10.0.0.109\" DevicePath \"\"" Feb 9 18:40:12.994677 kubelet[1402]: I0209 18:40:12.994512 1402 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/92a00ba7-eb51-4b3b-a850-83a114f07f2b-clustermesh-secrets\") on node \"10.0.0.109\" DevicePath \"\"" Feb 9 18:40:13.411432 env[1142]: time="2024-02-09T18:40:13.411369965Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:13.412642 env[1142]: time="2024-02-09T18:40:13.412605074Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:13.414283 env[1142]: time="2024-02-09T18:40:13.414252194Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:13.414766 env[1142]: time="2024-02-09T18:40:13.414725326Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 9 18:40:13.416668 env[1142]: time="2024-02-09T18:40:13.416622211Z" level=info msg="CreateContainer within sandbox \"51c9e4f0b3ee9122c25bd60734ba28dabbd19cabd28b81648c1a0369ceb38c61\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 18:40:13.426012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3309865992.mount: Deactivated successfully. 
Feb 9 18:40:13.429809 env[1142]: time="2024-02-09T18:40:13.429773928Z" level=info msg="CreateContainer within sandbox \"51c9e4f0b3ee9122c25bd60734ba28dabbd19cabd28b81648c1a0369ceb38c61\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3f8a7865745f7f9c5bddb55fc5a62eb7b1e54279628b3960e0b3c23564063355\"" Feb 9 18:40:13.430550 env[1142]: time="2024-02-09T18:40:13.430512626Z" level=info msg="StartContainer for \"3f8a7865745f7f9c5bddb55fc5a62eb7b1e54279628b3960e0b3c23564063355\"" Feb 9 18:40:13.445054 systemd[1]: Started cri-containerd-3f8a7865745f7f9c5bddb55fc5a62eb7b1e54279628b3960e0b3c23564063355.scope. Feb 9 18:40:13.476879 env[1142]: time="2024-02-09T18:40:13.476836742Z" level=info msg="StartContainer for \"3f8a7865745f7f9c5bddb55fc5a62eb7b1e54279628b3960e0b3c23564063355\" returns successfully" Feb 9 18:40:13.491049 kubelet[1402]: E0209 18:40:13.491018 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:13.742075 kubelet[1402]: I0209 18:40:13.741764 1402 scope.go:115] "RemoveContainer" containerID="4a2bd7691948eecf630c83a253f0606f208dda9e42e7a8a3b5ba4ee052a8bde4" Feb 9 18:40:13.744014 env[1142]: time="2024-02-09T18:40:13.743357125Z" level=info msg="RemoveContainer for \"4a2bd7691948eecf630c83a253f0606f208dda9e42e7a8a3b5ba4ee052a8bde4\"" Feb 9 18:40:13.744253 kubelet[1402]: E0209 18:40:13.743832 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:13.746283 systemd[1]: Removed slice kubepods-burstable-pod92a00ba7_eb51_4b3b_a850_83a114f07f2b.slice. 
Feb 9 18:40:13.795106 kubelet[1402]: I0209 18:40:13.795071 1402 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:40:13.795254 kubelet[1402]: E0209 18:40:13.795126 1402 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="92a00ba7-eb51-4b3b-a850-83a114f07f2b" containerName="mount-cgroup" Feb 9 18:40:13.795254 kubelet[1402]: I0209 18:40:13.795148 1402 memory_manager.go:346] "RemoveStaleState removing state" podUID="92a00ba7-eb51-4b3b-a850-83a114f07f2b" containerName="mount-cgroup" Feb 9 18:40:13.796530 env[1142]: time="2024-02-09T18:40:13.796448724Z" level=info msg="RemoveContainer for \"4a2bd7691948eecf630c83a253f0606f208dda9e42e7a8a3b5ba4ee052a8bde4\" returns successfully" Feb 9 18:40:13.800707 systemd[1]: Created slice kubepods-burstable-pod0a5d605e_5879_4658_ae3f_a2cee80bb2d7.slice. Feb 9 18:40:13.899522 kubelet[1402]: I0209 18:40:13.899484 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0a5d605e-5879-4658-ae3f-a2cee80bb2d7-cilium-run\") pod \"cilium-7jlzt\" (UID: \"0a5d605e-5879-4658-ae3f-a2cee80bb2d7\") " pod="kube-system/cilium-7jlzt" Feb 9 18:40:13.899683 kubelet[1402]: I0209 18:40:13.899550 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0a5d605e-5879-4658-ae3f-a2cee80bb2d7-cilium-cgroup\") pod \"cilium-7jlzt\" (UID: \"0a5d605e-5879-4658-ae3f-a2cee80bb2d7\") " pod="kube-system/cilium-7jlzt" Feb 9 18:40:13.899683 kubelet[1402]: I0209 18:40:13.899594 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0a5d605e-5879-4658-ae3f-a2cee80bb2d7-etc-cni-netd\") pod \"cilium-7jlzt\" (UID: \"0a5d605e-5879-4658-ae3f-a2cee80bb2d7\") " pod="kube-system/cilium-7jlzt" Feb 9 18:40:13.899683 kubelet[1402]: I0209 18:40:13.899616 1402 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a5d605e-5879-4658-ae3f-a2cee80bb2d7-xtables-lock\") pod \"cilium-7jlzt\" (UID: \"0a5d605e-5879-4658-ae3f-a2cee80bb2d7\") " pod="kube-system/cilium-7jlzt" Feb 9 18:40:13.899683 kubelet[1402]: I0209 18:40:13.899638 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0a5d605e-5879-4658-ae3f-a2cee80bb2d7-host-proc-sys-kernel\") pod \"cilium-7jlzt\" (UID: \"0a5d605e-5879-4658-ae3f-a2cee80bb2d7\") " pod="kube-system/cilium-7jlzt" Feb 9 18:40:13.899683 kubelet[1402]: I0209 18:40:13.899662 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0a5d605e-5879-4658-ae3f-a2cee80bb2d7-hostproc\") pod \"cilium-7jlzt\" (UID: \"0a5d605e-5879-4658-ae3f-a2cee80bb2d7\") " pod="kube-system/cilium-7jlzt" Feb 9 18:40:13.899683 kubelet[1402]: I0209 18:40:13.899682 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0a5d605e-5879-4658-ae3f-a2cee80bb2d7-clustermesh-secrets\") pod \"cilium-7jlzt\" (UID: \"0a5d605e-5879-4658-ae3f-a2cee80bb2d7\") " pod="kube-system/cilium-7jlzt" Feb 9 18:40:13.899841 kubelet[1402]: I0209 18:40:13.899702 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0a5d605e-5879-4658-ae3f-a2cee80bb2d7-host-proc-sys-net\") pod \"cilium-7jlzt\" (UID: \"0a5d605e-5879-4658-ae3f-a2cee80bb2d7\") " pod="kube-system/cilium-7jlzt" Feb 9 18:40:13.899841 kubelet[1402]: I0209 18:40:13.899724 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/0a5d605e-5879-4658-ae3f-a2cee80bb2d7-cni-path\") pod \"cilium-7jlzt\" (UID: \"0a5d605e-5879-4658-ae3f-a2cee80bb2d7\") " pod="kube-system/cilium-7jlzt" Feb 9 18:40:13.899841 kubelet[1402]: I0209 18:40:13.899743 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a5d605e-5879-4658-ae3f-a2cee80bb2d7-lib-modules\") pod \"cilium-7jlzt\" (UID: \"0a5d605e-5879-4658-ae3f-a2cee80bb2d7\") " pod="kube-system/cilium-7jlzt" Feb 9 18:40:13.899841 kubelet[1402]: I0209 18:40:13.899765 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0a5d605e-5879-4658-ae3f-a2cee80bb2d7-cilium-config-path\") pod \"cilium-7jlzt\" (UID: \"0a5d605e-5879-4658-ae3f-a2cee80bb2d7\") " pod="kube-system/cilium-7jlzt" Feb 9 18:40:13.899841 kubelet[1402]: I0209 18:40:13.899784 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0a5d605e-5879-4658-ae3f-a2cee80bb2d7-hubble-tls\") pod \"cilium-7jlzt\" (UID: \"0a5d605e-5879-4658-ae3f-a2cee80bb2d7\") " pod="kube-system/cilium-7jlzt" Feb 9 18:40:13.899841 kubelet[1402]: I0209 18:40:13.899805 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vjgb\" (UniqueName: \"kubernetes.io/projected/0a5d605e-5879-4658-ae3f-a2cee80bb2d7-kube-api-access-8vjgb\") pod \"cilium-7jlzt\" (UID: \"0a5d605e-5879-4658-ae3f-a2cee80bb2d7\") " pod="kube-system/cilium-7jlzt" Feb 9 18:40:13.899994 kubelet[1402]: I0209 18:40:13.899840 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0a5d605e-5879-4658-ae3f-a2cee80bb2d7-cilium-ipsec-secrets\") pod \"cilium-7jlzt\" (UID: 
\"0a5d605e-5879-4658-ae3f-a2cee80bb2d7\") " pod="kube-system/cilium-7jlzt" Feb 9 18:40:13.899994 kubelet[1402]: I0209 18:40:13.899862 1402 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0a5d605e-5879-4658-ae3f-a2cee80bb2d7-bpf-maps\") pod \"cilium-7jlzt\" (UID: \"0a5d605e-5879-4658-ae3f-a2cee80bb2d7\") " pod="kube-system/cilium-7jlzt" Feb 9 18:40:14.202228 kubelet[1402]: I0209 18:40:14.202188 1402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-q7jzb" podStartSLOduration=-9.223372033652624e+09 pod.CreationTimestamp="2024-02-09 18:40:11 +0000 UTC" firstStartedPulling="2024-02-09 18:40:12.178145905 +0000 UTC m=+68.842152758" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:40:13.796400203 +0000 UTC m=+70.460407056" watchObservedRunningTime="2024-02-09 18:40:14.202152045 +0000 UTC m=+70.866158898" Feb 9 18:40:14.411464 kubelet[1402]: E0209 18:40:14.411435 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:14.411928 env[1142]: time="2024-02-09T18:40:14.411887358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7jlzt,Uid:0a5d605e-5879-4658-ae3f-a2cee80bb2d7,Namespace:kube-system,Attempt:0,}" Feb 9 18:40:14.423194 env[1142]: time="2024-02-09T18:40:14.423123664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:40:14.423194 env[1142]: time="2024-02-09T18:40:14.423167545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:40:14.423194 env[1142]: time="2024-02-09T18:40:14.423180865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:40:14.423367 env[1142]: time="2024-02-09T18:40:14.423295588Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/96203ac1fa92d24b67f0fb90c097ab26a6252209dbca3b7bd89de2148df6700d pid=3258 runtime=io.containerd.runc.v2 Feb 9 18:40:14.434156 systemd[1]: Started cri-containerd-96203ac1fa92d24b67f0fb90c097ab26a6252209dbca3b7bd89de2148df6700d.scope. Feb 9 18:40:14.465659 env[1142]: time="2024-02-09T18:40:14.465547386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7jlzt,Uid:0a5d605e-5879-4658-ae3f-a2cee80bb2d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"96203ac1fa92d24b67f0fb90c097ab26a6252209dbca3b7bd89de2148df6700d\"" Feb 9 18:40:14.466348 kubelet[1402]: E0209 18:40:14.466306 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:14.468451 env[1142]: time="2024-02-09T18:40:14.468392573Z" level=info msg="CreateContainer within sandbox \"96203ac1fa92d24b67f0fb90c097ab26a6252209dbca3b7bd89de2148df6700d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 18:40:14.477693 env[1142]: time="2024-02-09T18:40:14.477642871Z" level=info msg="CreateContainer within sandbox \"96203ac1fa92d24b67f0fb90c097ab26a6252209dbca3b7bd89de2148df6700d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6d1036a03494740cc3830578bfcd1a9d22c5b471356279e64cf1b3ed0f66e5cd\"" Feb 9 18:40:14.478210 env[1142]: time="2024-02-09T18:40:14.478175684Z" level=info msg="StartContainer for \"6d1036a03494740cc3830578bfcd1a9d22c5b471356279e64cf1b3ed0f66e5cd\"" Feb 9 18:40:14.496895 
kubelet[1402]: E0209 18:40:14.491494 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:14.491616 systemd[1]: Started cri-containerd-6d1036a03494740cc3830578bfcd1a9d22c5b471356279e64cf1b3ed0f66e5cd.scope. Feb 9 18:40:14.505154 kubelet[1402]: E0209 18:40:14.505126 1402 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 18:40:14.527448 env[1142]: time="2024-02-09T18:40:14.527390566Z" level=info msg="StartContainer for \"6d1036a03494740cc3830578bfcd1a9d22c5b471356279e64cf1b3ed0f66e5cd\" returns successfully" Feb 9 18:40:14.536270 systemd[1]: cri-containerd-6d1036a03494740cc3830578bfcd1a9d22c5b471356279e64cf1b3ed0f66e5cd.scope: Deactivated successfully. Feb 9 18:40:14.553996 env[1142]: time="2024-02-09T18:40:14.553957194Z" level=info msg="shim disconnected" id=6d1036a03494740cc3830578bfcd1a9d22c5b471356279e64cf1b3ed0f66e5cd Feb 9 18:40:14.554191 env[1142]: time="2024-02-09T18:40:14.554172559Z" level=warning msg="cleaning up after shim disconnected" id=6d1036a03494740cc3830578bfcd1a9d22c5b471356279e64cf1b3ed0f66e5cd namespace=k8s.io Feb 9 18:40:14.554257 env[1142]: time="2024-02-09T18:40:14.554243441Z" level=info msg="cleaning up dead shim" Feb 9 18:40:14.560850 env[1142]: time="2024-02-09T18:40:14.560812156Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3344 runtime=io.containerd.runc.v2\n" Feb 9 18:40:14.617135 kubelet[1402]: I0209 18:40:14.617108 1402 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=92a00ba7-eb51-4b3b-a850-83a114f07f2b path="/var/lib/kubelet/pods/92a00ba7-eb51-4b3b-a850-83a114f07f2b/volumes" Feb 9 18:40:14.748956 kubelet[1402]: E0209 18:40:14.748194 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:14.748956 kubelet[1402]: E0209 18:40:14.748263 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:14.750025 env[1142]: time="2024-02-09T18:40:14.749990144Z" level=info msg="CreateContainer within sandbox \"96203ac1fa92d24b67f0fb90c097ab26a6252209dbca3b7bd89de2148df6700d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 18:40:14.759661 env[1142]: time="2024-02-09T18:40:14.759618091Z" level=info msg="CreateContainer within sandbox \"96203ac1fa92d24b67f0fb90c097ab26a6252209dbca3b7bd89de2148df6700d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d21053050a011b66ac93aa3ad1e7a092b47e258411b809b639dfc115e5dae107\"" Feb 9 18:40:14.760711 env[1142]: time="2024-02-09T18:40:14.760685036Z" level=info msg="StartContainer for \"d21053050a011b66ac93aa3ad1e7a092b47e258411b809b639dfc115e5dae107\"" Feb 9 18:40:14.773999 systemd[1]: Started cri-containerd-d21053050a011b66ac93aa3ad1e7a092b47e258411b809b639dfc115e5dae107.scope. Feb 9 18:40:14.804824 env[1142]: time="2024-02-09T18:40:14.804780838Z" level=info msg="StartContainer for \"d21053050a011b66ac93aa3ad1e7a092b47e258411b809b639dfc115e5dae107\" returns successfully" Feb 9 18:40:14.811606 systemd[1]: cri-containerd-d21053050a011b66ac93aa3ad1e7a092b47e258411b809b639dfc115e5dae107.scope: Deactivated successfully. 
Feb 9 18:40:14.828317 env[1142]: time="2024-02-09T18:40:14.828278553Z" level=info msg="shim disconnected" id=d21053050a011b66ac93aa3ad1e7a092b47e258411b809b639dfc115e5dae107 Feb 9 18:40:14.828317 env[1142]: time="2024-02-09T18:40:14.828316754Z" level=warning msg="cleaning up after shim disconnected" id=d21053050a011b66ac93aa3ad1e7a092b47e258411b809b639dfc115e5dae107 namespace=k8s.io Feb 9 18:40:14.828497 env[1142]: time="2024-02-09T18:40:14.828325234Z" level=info msg="cleaning up dead shim" Feb 9 18:40:14.834586 env[1142]: time="2024-02-09T18:40:14.834550341Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3407 runtime=io.containerd.runc.v2\n" Feb 9 18:40:15.401322 kubelet[1402]: W0209 18:40:15.401260 1402 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92a00ba7_eb51_4b3b_a850_83a114f07f2b.slice/cri-containerd-4a2bd7691948eecf630c83a253f0606f208dda9e42e7a8a3b5ba4ee052a8bde4.scope WatchSource:0}: container "4a2bd7691948eecf630c83a253f0606f208dda9e42e7a8a3b5ba4ee052a8bde4" in namespace "k8s.io": not found Feb 9 18:40:15.492300 kubelet[1402]: E0209 18:40:15.492264 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:15.751608 kubelet[1402]: E0209 18:40:15.751499 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:15.754168 env[1142]: time="2024-02-09T18:40:15.754129843Z" level=info msg="CreateContainer within sandbox \"96203ac1fa92d24b67f0fb90c097ab26a6252209dbca3b7bd89de2148df6700d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 18:40:15.768823 env[1142]: time="2024-02-09T18:40:15.768770942Z" level=info msg="CreateContainer within sandbox 
\"96203ac1fa92d24b67f0fb90c097ab26a6252209dbca3b7bd89de2148df6700d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6f577a6437dba240645b9f41208e9891d901c0037e7657e91de86a8317e98f5b\"" Feb 9 18:40:15.769608 env[1142]: time="2024-02-09T18:40:15.769571721Z" level=info msg="StartContainer for \"6f577a6437dba240645b9f41208e9891d901c0037e7657e91de86a8317e98f5b\"" Feb 9 18:40:15.796568 systemd[1]: Started cri-containerd-6f577a6437dba240645b9f41208e9891d901c0037e7657e91de86a8317e98f5b.scope. Feb 9 18:40:15.830119 systemd[1]: cri-containerd-6f577a6437dba240645b9f41208e9891d901c0037e7657e91de86a8317e98f5b.scope: Deactivated successfully. Feb 9 18:40:15.830830 env[1142]: time="2024-02-09T18:40:15.830703617Z" level=info msg="StartContainer for \"6f577a6437dba240645b9f41208e9891d901c0037e7657e91de86a8317e98f5b\" returns successfully" Feb 9 18:40:15.849578 env[1142]: time="2024-02-09T18:40:15.849536533Z" level=info msg="shim disconnected" id=6f577a6437dba240645b9f41208e9891d901c0037e7657e91de86a8317e98f5b Feb 9 18:40:15.849735 env[1142]: time="2024-02-09T18:40:15.849578734Z" level=warning msg="cleaning up after shim disconnected" id=6f577a6437dba240645b9f41208e9891d901c0037e7657e91de86a8317e98f5b namespace=k8s.io Feb 9 18:40:15.849735 env[1142]: time="2024-02-09T18:40:15.849597615Z" level=info msg="cleaning up dead shim" Feb 9 18:40:15.856087 env[1142]: time="2024-02-09T18:40:15.856049924Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3462 runtime=io.containerd.runc.v2\n" Feb 9 18:40:15.991068 systemd[1]: run-containerd-runc-k8s.io-6f577a6437dba240645b9f41208e9891d901c0037e7657e91de86a8317e98f5b-runc.ojXPNI.mount: Deactivated successfully. Feb 9 18:40:15.991174 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f577a6437dba240645b9f41208e9891d901c0037e7657e91de86a8317e98f5b-rootfs.mount: Deactivated successfully. 
Feb 9 18:40:16.492678 kubelet[1402]: E0209 18:40:16.492629 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:16.755175 kubelet[1402]: E0209 18:40:16.754981 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:16.757786 env[1142]: time="2024-02-09T18:40:16.757657414Z" level=info msg="CreateContainer within sandbox \"96203ac1fa92d24b67f0fb90c097ab26a6252209dbca3b7bd89de2148df6700d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 18:40:16.772637 env[1142]: time="2024-02-09T18:40:16.772520432Z" level=info msg="CreateContainer within sandbox \"96203ac1fa92d24b67f0fb90c097ab26a6252209dbca3b7bd89de2148df6700d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9e5b3635af0d5c13d2fe0a1c1e2b8002060fbf31ff9ab4db1dcbd23fcdef7fc9\"" Feb 9 18:40:16.773339 env[1142]: time="2024-02-09T18:40:16.773285570Z" level=info msg="StartContainer for \"9e5b3635af0d5c13d2fe0a1c1e2b8002060fbf31ff9ab4db1dcbd23fcdef7fc9\"" Feb 9 18:40:16.790547 systemd[1]: Started cri-containerd-9e5b3635af0d5c13d2fe0a1c1e2b8002060fbf31ff9ab4db1dcbd23fcdef7fc9.scope. Feb 9 18:40:16.818293 systemd[1]: cri-containerd-9e5b3635af0d5c13d2fe0a1c1e2b8002060fbf31ff9ab4db1dcbd23fcdef7fc9.scope: Deactivated successfully. 
Feb 9 18:40:16.820094 env[1142]: time="2024-02-09T18:40:16.820010193Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a5d605e_5879_4658_ae3f_a2cee80bb2d7.slice/cri-containerd-9e5b3635af0d5c13d2fe0a1c1e2b8002060fbf31ff9ab4db1dcbd23fcdef7fc9.scope/memory.events\": no such file or directory" Feb 9 18:40:16.821846 env[1142]: time="2024-02-09T18:40:16.821795473Z" level=info msg="StartContainer for \"9e5b3635af0d5c13d2fe0a1c1e2b8002060fbf31ff9ab4db1dcbd23fcdef7fc9\" returns successfully" Feb 9 18:40:16.841286 env[1142]: time="2024-02-09T18:40:16.841212315Z" level=info msg="shim disconnected" id=9e5b3635af0d5c13d2fe0a1c1e2b8002060fbf31ff9ab4db1dcbd23fcdef7fc9 Feb 9 18:40:16.841286 env[1142]: time="2024-02-09T18:40:16.841262876Z" level=warning msg="cleaning up after shim disconnected" id=9e5b3635af0d5c13d2fe0a1c1e2b8002060fbf31ff9ab4db1dcbd23fcdef7fc9 namespace=k8s.io Feb 9 18:40:16.841286 env[1142]: time="2024-02-09T18:40:16.841272236Z" level=info msg="cleaning up dead shim" Feb 9 18:40:16.848516 env[1142]: time="2024-02-09T18:40:16.848466480Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3517 runtime=io.containerd.runc.v2\n" Feb 9 18:40:16.991203 systemd[1]: run-containerd-runc-k8s.io-9e5b3635af0d5c13d2fe0a1c1e2b8002060fbf31ff9ab4db1dcbd23fcdef7fc9-runc.DNauix.mount: Deactivated successfully. Feb 9 18:40:16.991308 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e5b3635af0d5c13d2fe0a1c1e2b8002060fbf31ff9ab4db1dcbd23fcdef7fc9-rootfs.mount: Deactivated successfully. 
Feb 9 18:40:17.493494 kubelet[1402]: E0209 18:40:17.493444 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:17.759719 kubelet[1402]: E0209 18:40:17.759484 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:17.761575 env[1142]: time="2024-02-09T18:40:17.761534515Z" level=info msg="CreateContainer within sandbox \"96203ac1fa92d24b67f0fb90c097ab26a6252209dbca3b7bd89de2148df6700d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 18:40:17.774519 env[1142]: time="2024-02-09T18:40:17.774464644Z" level=info msg="CreateContainer within sandbox \"96203ac1fa92d24b67f0fb90c097ab26a6252209dbca3b7bd89de2148df6700d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c0bfe2bcb181b263deee5b2c042f793071e7ff713b51efc876e1c080d359eea7\"" Feb 9 18:40:17.774923 env[1142]: time="2024-02-09T18:40:17.774901533Z" level=info msg="StartContainer for \"c0bfe2bcb181b263deee5b2c042f793071e7ff713b51efc876e1c080d359eea7\"" Feb 9 18:40:17.797473 systemd[1]: Started cri-containerd-c0bfe2bcb181b263deee5b2c042f793071e7ff713b51efc876e1c080d359eea7.scope. Feb 9 18:40:17.834117 env[1142]: time="2024-02-09T18:40:17.834049775Z" level=info msg="StartContainer for \"c0bfe2bcb181b263deee5b2c042f793071e7ff713b51efc876e1c080d359eea7\" returns successfully" Feb 9 18:40:17.991207 systemd[1]: run-containerd-runc-k8s.io-c0bfe2bcb181b263deee5b2c042f793071e7ff713b51efc876e1c080d359eea7-runc.bJbvOx.mount: Deactivated successfully. 
Feb 9 18:40:18.100442 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Feb 9 18:40:18.452881 kubelet[1402]: I0209 18:40:18.452764 1402 setters.go:548] "Node became not ready" node="10.0.0.109" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 18:40:18.452707556 +0000 UTC m=+75.116714409 LastTransitionTime:2024-02-09 18:40:18.452707556 +0000 UTC m=+75.116714409 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 18:40:18.493772 kubelet[1402]: E0209 18:40:18.493733 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:18.518312 kubelet[1402]: W0209 18:40:18.518273 1402 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a5d605e_5879_4658_ae3f_a2cee80bb2d7.slice/cri-containerd-6d1036a03494740cc3830578bfcd1a9d22c5b471356279e64cf1b3ed0f66e5cd.scope WatchSource:0}: task 6d1036a03494740cc3830578bfcd1a9d22c5b471356279e64cf1b3ed0f66e5cd not found: not found Feb 9 18:40:18.763978 kubelet[1402]: E0209 18:40:18.763672 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:18.776239 kubelet[1402]: I0209 18:40:18.776190 1402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-7jlzt" podStartSLOduration=5.776156786 pod.CreationTimestamp="2024-02-09 18:40:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:40:18.775785017 +0000 UTC m=+75.439791870" watchObservedRunningTime="2024-02-09 18:40:18.776156786 +0000 UTC m=+75.440163639" Feb 9 18:40:19.494283 kubelet[1402]: E0209 
18:40:19.494225 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:19.765721 kubelet[1402]: E0209 18:40:19.765617 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:20.494682 kubelet[1402]: E0209 18:40:20.494633 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:20.730490 systemd-networkd[1040]: lxc_health: Link UP Feb 9 18:40:20.743450 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 18:40:20.743403 systemd-networkd[1040]: lxc_health: Gained carrier Feb 9 18:40:20.768091 kubelet[1402]: E0209 18:40:20.767674 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:21.495739 kubelet[1402]: E0209 18:40:21.495692 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:21.625233 kubelet[1402]: W0209 18:40:21.625183 1402 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a5d605e_5879_4658_ae3f_a2cee80bb2d7.slice/cri-containerd-d21053050a011b66ac93aa3ad1e7a092b47e258411b809b639dfc115e5dae107.scope WatchSource:0}: task d21053050a011b66ac93aa3ad1e7a092b47e258411b809b639dfc115e5dae107 not found: not found Feb 9 18:40:22.413430 kubelet[1402]: E0209 18:40:22.413377 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:22.496136 kubelet[1402]: E0209 18:40:22.496081 1402 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:22.770871 kubelet[1402]: E0209 18:40:22.770755 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:22.799561 systemd-networkd[1040]: lxc_health: Gained IPv6LL Feb 9 18:40:23.496646 kubelet[1402]: E0209 18:40:23.496587 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:23.772629 kubelet[1402]: E0209 18:40:23.772528 1402 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:40:24.442938 kubelet[1402]: E0209 18:40:24.442890 1402 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:24.497146 kubelet[1402]: E0209 18:40:24.497114 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:24.732463 kubelet[1402]: W0209 18:40:24.732330 1402 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a5d605e_5879_4658_ae3f_a2cee80bb2d7.slice/cri-containerd-6f577a6437dba240645b9f41208e9891d901c0037e7657e91de86a8317e98f5b.scope WatchSource:0}: task 6f577a6437dba240645b9f41208e9891d901c0037e7657e91de86a8317e98f5b not found: not found Feb 9 18:40:25.497404 kubelet[1402]: E0209 18:40:25.497348 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:26.498271 kubelet[1402]: E0209 18:40:26.498173 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:26.697021 systemd[1]: 
run-containerd-runc-k8s.io-c0bfe2bcb181b263deee5b2c042f793071e7ff713b51efc876e1c080d359eea7-runc.BH57VU.mount: Deactivated successfully. Feb 9 18:40:27.498686 kubelet[1402]: E0209 18:40:27.498616 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:27.839040 kubelet[1402]: W0209 18:40:27.838997 1402 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a5d605e_5879_4658_ae3f_a2cee80bb2d7.slice/cri-containerd-9e5b3635af0d5c13d2fe0a1c1e2b8002060fbf31ff9ab4db1dcbd23fcdef7fc9.scope WatchSource:0}: task 9e5b3635af0d5c13d2fe0a1c1e2b8002060fbf31ff9ab4db1dcbd23fcdef7fc9 not found: not found Feb 9 18:40:28.499274 kubelet[1402]: E0209 18:40:28.499228 1402 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"