Feb 9 09:56:50.726234 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Feb 9 09:56:50.726254 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 08:56:26 -00 2024 Feb 9 09:56:50.726262 kernel: efi: EFI v2.70 by EDK II Feb 9 09:56:50.726268 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Feb 9 09:56:50.726273 kernel: random: crng init done Feb 9 09:56:50.726278 kernel: ACPI: Early table checksum verification disabled Feb 9 09:56:50.726284 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Feb 9 09:56:50.726291 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Feb 9 09:56:50.726296 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 09:56:50.726302 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 09:56:50.726307 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 09:56:50.726312 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 09:56:50.726317 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 09:56:50.726343 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 09:56:50.726352 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 09:56:50.726358 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 09:56:50.726363 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 09:56:50.726369 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Feb 9 09:56:50.726375 kernel: NUMA: Failed to initialise from firmware Feb 9 09:56:50.726381 kernel: NUMA: Faking a 
node at [mem 0x0000000040000000-0x00000000dcffffff] Feb 9 09:56:50.726386 kernel: NUMA: NODE_DATA [mem 0xdcb08900-0xdcb0dfff] Feb 9 09:56:50.726392 kernel: Zone ranges: Feb 9 09:56:50.726397 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Feb 9 09:56:50.726404 kernel: DMA32 empty Feb 9 09:56:50.726409 kernel: Normal empty Feb 9 09:56:50.726415 kernel: Movable zone start for each node Feb 9 09:56:50.726420 kernel: Early memory node ranges Feb 9 09:56:50.726426 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Feb 9 09:56:50.726432 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Feb 9 09:56:50.726437 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Feb 9 09:56:50.726443 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Feb 9 09:56:50.726449 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Feb 9 09:56:50.726454 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Feb 9 09:56:50.726460 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Feb 9 09:56:50.726465 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Feb 9 09:56:50.726472 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Feb 9 09:56:50.726478 kernel: psci: probing for conduit method from ACPI. Feb 9 09:56:50.726483 kernel: psci: PSCIv1.1 detected in firmware. 
Feb 9 09:56:50.726488 kernel: psci: Using standard PSCI v0.2 function IDs Feb 9 09:56:50.726494 kernel: psci: Trusted OS migration not required Feb 9 09:56:50.726502 kernel: psci: SMC Calling Convention v1.1 Feb 9 09:56:50.726508 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Feb 9 09:56:50.726515 kernel: ACPI: SRAT not present Feb 9 09:56:50.726522 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784 Feb 9 09:56:50.726528 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096 Feb 9 09:56:50.726534 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Feb 9 09:56:50.726540 kernel: Detected PIPT I-cache on CPU0 Feb 9 09:56:50.726546 kernel: CPU features: detected: GIC system register CPU interface Feb 9 09:56:50.726552 kernel: CPU features: detected: Hardware dirty bit management Feb 9 09:56:50.726558 kernel: CPU features: detected: Spectre-v4 Feb 9 09:56:50.726564 kernel: CPU features: detected: Spectre-BHB Feb 9 09:56:50.726571 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 9 09:56:50.726583 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 9 09:56:50.726590 kernel: CPU features: detected: ARM erratum 1418040 Feb 9 09:56:50.726596 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Feb 9 09:56:50.726602 kernel: Policy zone: DMA Feb 9 09:56:50.726609 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d Feb 9 09:56:50.726615 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Feb 9 09:56:50.726621 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 9 09:56:50.726627 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 09:56:50.726633 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 9 09:56:50.726640 kernel: Memory: 2459140K/2572288K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 113148K reserved, 0K cma-reserved) Feb 9 09:56:50.726647 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 9 09:56:50.726653 kernel: trace event string verifier disabled Feb 9 09:56:50.726659 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 9 09:56:50.726665 kernel: rcu: RCU event tracing is enabled. Feb 9 09:56:50.726671 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 9 09:56:50.726678 kernel: Trampoline variant of Tasks RCU enabled. Feb 9 09:56:50.726684 kernel: Tracing variant of Tasks RCU enabled. Feb 9 09:56:50.726690 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 9 09:56:50.726696 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 9 09:56:50.726702 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 9 09:56:50.726708 kernel: GICv3: 256 SPIs implemented Feb 9 09:56:50.726715 kernel: GICv3: 0 Extended SPIs implemented Feb 9 09:56:50.726721 kernel: GICv3: Distributor has no Range Selector support Feb 9 09:56:50.726727 kernel: Root IRQ handler: gic_handle_irq Feb 9 09:56:50.726733 kernel: GICv3: 16 PPIs implemented Feb 9 09:56:50.726742 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Feb 9 09:56:50.726748 kernel: ACPI: SRAT not present Feb 9 09:56:50.726754 kernel: ITS [mem 0x08080000-0x0809ffff] Feb 9 09:56:50.726760 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Feb 9 09:56:50.726766 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Feb 9 09:56:50.726772 kernel: GICv3: using LPI property table @0x00000000400d0000 Feb 9 09:56:50.726778 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Feb 9 09:56:50.726784 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 9 09:56:50.726791 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Feb 9 09:56:50.726798 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 9 09:56:50.726804 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 9 09:56:50.726810 kernel: arm-pv: using stolen time PV Feb 9 09:56:50.726816 kernel: Console: colour dummy device 80x25 Feb 9 09:56:50.726822 kernel: ACPI: Core revision 20210730 Feb 9 09:56:50.726829 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
50.00 BogoMIPS (lpj=25000) Feb 9 09:56:50.726835 kernel: pid_max: default: 32768 minimum: 301 Feb 9 09:56:50.726841 kernel: LSM: Security Framework initializing Feb 9 09:56:50.726847 kernel: SELinux: Initializing. Feb 9 09:56:50.726854 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 9 09:56:50.726861 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 9 09:56:50.726867 kernel: rcu: Hierarchical SRCU implementation. Feb 9 09:56:50.726873 kernel: Platform MSI: ITS@0x8080000 domain created Feb 9 09:56:50.726879 kernel: PCI/MSI: ITS@0x8080000 domain created Feb 9 09:56:50.726885 kernel: Remapping and enabling EFI services. Feb 9 09:56:50.726891 kernel: smp: Bringing up secondary CPUs ... Feb 9 09:56:50.726897 kernel: Detected PIPT I-cache on CPU1 Feb 9 09:56:50.726904 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Feb 9 09:56:50.726911 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Feb 9 09:56:50.726917 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 9 09:56:50.726924 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Feb 9 09:56:50.726930 kernel: Detected PIPT I-cache on CPU2 Feb 9 09:56:50.726936 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Feb 9 09:56:50.726942 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Feb 9 09:56:50.726948 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 9 09:56:50.726954 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Feb 9 09:56:50.726960 kernel: Detected PIPT I-cache on CPU3 Feb 9 09:56:50.726967 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Feb 9 09:56:50.726974 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Feb 9 09:56:50.726980 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 9 09:56:50.726986 kernel: CPU3: 
Booted secondary processor 0x0000000003 [0x413fd0c1] Feb 9 09:56:50.726992 kernel: smp: Brought up 1 node, 4 CPUs Feb 9 09:56:50.727003 kernel: SMP: Total of 4 processors activated. Feb 9 09:56:50.727010 kernel: CPU features: detected: 32-bit EL0 Support Feb 9 09:56:50.727017 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 9 09:56:50.727023 kernel: CPU features: detected: Common not Private translations Feb 9 09:56:50.727030 kernel: CPU features: detected: CRC32 instructions Feb 9 09:56:50.727036 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 9 09:56:50.727043 kernel: CPU features: detected: LSE atomic instructions Feb 9 09:56:50.727049 kernel: CPU features: detected: Privileged Access Never Feb 9 09:56:50.727057 kernel: CPU features: detected: RAS Extension Support Feb 9 09:56:50.727064 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Feb 9 09:56:50.727070 kernel: CPU: All CPU(s) started at EL1 Feb 9 09:56:50.727077 kernel: alternatives: patching kernel code Feb 9 09:56:50.727084 kernel: devtmpfs: initialized Feb 9 09:56:50.727091 kernel: KASLR enabled Feb 9 09:56:50.727097 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 9 09:56:50.727104 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 9 09:56:50.727110 kernel: pinctrl core: initialized pinctrl subsystem Feb 9 09:56:50.727117 kernel: SMBIOS 3.0.0 present. 
Feb 9 09:56:50.727123 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Feb 9 09:56:50.727130 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 9 09:56:50.727136 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 9 09:56:50.727143 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 9 09:56:50.727151 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 9 09:56:50.727157 kernel: audit: initializing netlink subsys (disabled) Feb 9 09:56:50.727164 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1 Feb 9 09:56:50.727170 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 9 09:56:50.727177 kernel: cpuidle: using governor menu Feb 9 09:56:50.727183 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Feb 9 09:56:50.727190 kernel: ASID allocator initialised with 32768 entries Feb 9 09:56:50.727196 kernel: ACPI: bus type PCI registered Feb 9 09:56:50.727203 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 9 09:56:50.727210 kernel: Serial: AMBA PL011 UART driver Feb 9 09:56:50.727216 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 9 09:56:50.727223 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Feb 9 09:56:50.727230 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 9 09:56:50.727236 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Feb 9 09:56:50.727243 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 09:56:50.727249 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 9 09:56:50.727256 kernel: ACPI: Added _OSI(Module Device) Feb 9 09:56:50.727262 kernel: ACPI: Added _OSI(Processor Device) Feb 9 09:56:50.727270 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 9 09:56:50.727276 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 9 09:56:50.727283 kernel: ACPI: Added 
_OSI(Linux-Dell-Video) Feb 9 09:56:50.727289 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 9 09:56:50.727296 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 9 09:56:50.727302 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 9 09:56:50.727309 kernel: ACPI: Interpreter enabled Feb 9 09:56:50.727315 kernel: ACPI: Using GIC for interrupt routing Feb 9 09:56:50.727332 kernel: ACPI: MCFG table detected, 1 entries Feb 9 09:56:50.727343 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Feb 9 09:56:50.727350 kernel: printk: console [ttyAMA0] enabled Feb 9 09:56:50.727357 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 9 09:56:50.727473 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 9 09:56:50.727536 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Feb 9 09:56:50.727602 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 9 09:56:50.727660 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Feb 9 09:56:50.727721 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Feb 9 09:56:50.727730 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Feb 9 09:56:50.727736 kernel: PCI host bridge to bus 0000:00 Feb 9 09:56:50.727801 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Feb 9 09:56:50.727855 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Feb 9 09:56:50.727908 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Feb 9 09:56:50.727960 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 9 09:56:50.728034 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Feb 9 09:56:50.728105 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Feb 9 09:56:50.728165 kernel: pci 0000:00:01.0: reg 0x10: [io 
0x0000-0x001f] Feb 9 09:56:50.728224 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Feb 9 09:56:50.728284 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Feb 9 09:56:50.728357 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Feb 9 09:56:50.728418 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Feb 9 09:56:50.728480 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Feb 9 09:56:50.728534 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Feb 9 09:56:50.728593 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 9 09:56:50.728646 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Feb 9 09:56:50.728654 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 9 09:56:50.728662 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 9 09:56:50.728669 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 9 09:56:50.728677 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 9 09:56:50.728684 kernel: iommu: Default domain type: Translated Feb 9 09:56:50.728690 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 9 09:56:50.728697 kernel: vgaarb: loaded Feb 9 09:56:50.728703 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 09:56:50.728710 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 09:56:50.728716 kernel: PTP clock support registered Feb 9 09:56:50.728723 kernel: Registered efivars operations Feb 9 09:56:50.728729 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 9 09:56:50.728735 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 09:56:50.728757 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 09:56:50.728764 kernel: pnp: PnP ACPI init Feb 9 09:56:50.728826 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Feb 9 09:56:50.728836 kernel: pnp: PnP ACPI: found 1 devices Feb 9 09:56:50.728842 kernel: NET: Registered PF_INET protocol family Feb 9 09:56:50.728849 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 9 09:56:50.728855 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 9 09:56:50.728862 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 09:56:50.728870 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 9 09:56:50.728877 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Feb 9 09:56:50.728883 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 9 09:56:50.728890 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 9 09:56:50.728897 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 9 09:56:50.728903 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 09:56:50.728910 kernel: PCI: CLS 0 bytes, default 64 Feb 9 09:56:50.728916 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 9 09:56:50.728924 kernel: kvm [1]: HYP mode not available Feb 9 09:56:50.728930 kernel: Initialise system trusted keyrings Feb 9 09:56:50.728938 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 9 09:56:50.728945 kernel: Key type asymmetric registered 
Feb 9 09:56:50.728952 kernel: Asymmetric key parser 'x509' registered Feb 9 09:56:50.728958 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 09:56:50.728965 kernel: io scheduler mq-deadline registered Feb 9 09:56:50.728971 kernel: io scheduler kyber registered Feb 9 09:56:50.728977 kernel: io scheduler bfq registered Feb 9 09:56:50.728984 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 9 09:56:50.728991 kernel: ACPI: button: Power Button [PWRB] Feb 9 09:56:50.728999 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 9 09:56:50.729057 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Feb 9 09:56:50.729066 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 09:56:50.729073 kernel: thunder_xcv, ver 1.0 Feb 9 09:56:50.729079 kernel: thunder_bgx, ver 1.0 Feb 9 09:56:50.729086 kernel: nicpf, ver 1.0 Feb 9 09:56:50.729092 kernel: nicvf, ver 1.0 Feb 9 09:56:50.729156 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 9 09:56:50.729212 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T09:56:50 UTC (1707472610) Feb 9 09:56:50.729221 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 9 09:56:50.729227 kernel: NET: Registered PF_INET6 protocol family Feb 9 09:56:50.729234 kernel: Segment Routing with IPv6 Feb 9 09:56:50.729240 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 09:56:50.729247 kernel: NET: Registered PF_PACKET protocol family Feb 9 09:56:50.729253 kernel: Key type dns_resolver registered Feb 9 09:56:50.729260 kernel: registered taskstats version 1 Feb 9 09:56:50.729268 kernel: Loading compiled-in X.509 certificates Feb 9 09:56:50.729275 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: ca91574208414224935c9cea513398977daf917d' Feb 9 09:56:50.729281 kernel: Key type .fscrypt registered Feb 9 09:56:50.729288 kernel: Key type fscrypt-provisioning registered Feb 9 09:56:50.729294 kernel: ima: No TPM chip found, 
activating TPM-bypass! Feb 9 09:56:50.729301 kernel: ima: Allocated hash algorithm: sha1 Feb 9 09:56:50.729307 kernel: ima: No architecture policies found Feb 9 09:56:50.729314 kernel: Freeing unused kernel memory: 34688K Feb 9 09:56:50.729329 kernel: Run /init as init process Feb 9 09:56:50.729338 kernel: with arguments: Feb 9 09:56:50.729344 kernel: /init Feb 9 09:56:50.729350 kernel: with environment: Feb 9 09:56:50.729356 kernel: HOME=/ Feb 9 09:56:50.729363 kernel: TERM=linux Feb 9 09:56:50.729369 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 09:56:50.729378 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 09:56:50.729386 systemd[1]: Detected virtualization kvm. Feb 9 09:56:50.729395 systemd[1]: Detected architecture arm64. Feb 9 09:56:50.729401 systemd[1]: Running in initrd. Feb 9 09:56:50.729408 systemd[1]: No hostname configured, using default hostname. Feb 9 09:56:50.729415 systemd[1]: Hostname set to . Feb 9 09:56:50.729422 systemd[1]: Initializing machine ID from VM UUID. Feb 9 09:56:50.729429 systemd[1]: Queued start job for default target initrd.target. Feb 9 09:56:50.729436 systemd[1]: Started systemd-ask-password-console.path. Feb 9 09:56:50.729443 systemd[1]: Reached target cryptsetup.target. Feb 9 09:56:50.729451 systemd[1]: Reached target paths.target. Feb 9 09:56:50.729458 systemd[1]: Reached target slices.target. Feb 9 09:56:50.729465 systemd[1]: Reached target swap.target. Feb 9 09:56:50.729472 systemd[1]: Reached target timers.target. Feb 9 09:56:50.729480 systemd[1]: Listening on iscsid.socket. Feb 9 09:56:50.729487 systemd[1]: Listening on iscsiuio.socket. Feb 9 09:56:50.729494 systemd[1]: Listening on systemd-journald-audit.socket. 
Feb 9 09:56:50.729503 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 09:56:50.729510 systemd[1]: Listening on systemd-journald.socket. Feb 9 09:56:50.729517 systemd[1]: Listening on systemd-networkd.socket. Feb 9 09:56:50.729525 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 09:56:50.729532 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 09:56:50.729538 systemd[1]: Reached target sockets.target. Feb 9 09:56:50.729546 systemd[1]: Starting kmod-static-nodes.service... Feb 9 09:56:50.729553 systemd[1]: Finished network-cleanup.service. Feb 9 09:56:50.729561 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 09:56:50.729569 systemd[1]: Starting systemd-journald.service... Feb 9 09:56:50.729576 systemd[1]: Starting systemd-modules-load.service... Feb 9 09:56:50.729599 systemd[1]: Starting systemd-resolved.service... Feb 9 09:56:50.729606 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 09:56:50.729613 systemd[1]: Finished kmod-static-nodes.service. Feb 9 09:56:50.729620 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 09:56:50.729627 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 09:56:50.729634 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 09:56:50.729641 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 09:56:50.729650 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 09:56:50.729659 systemd-journald[289]: Journal started Feb 9 09:56:50.729698 systemd-journald[289]: Runtime Journal (/run/log/journal/8ba6db643a824b86816804ac1c30e5c1) is 6.0M, max 48.7M, 42.6M free. Feb 9 09:56:50.716584 systemd-modules-load[290]: Inserted module 'overlay' Feb 9 09:56:50.731801 systemd[1]: Started systemd-journald.service. Feb 9 09:56:50.735104 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Feb 9 09:56:50.735137 kernel: audit: type=1130 audit(1707472610.733:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:50.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:50.733568 systemd-resolved[291]: Positive Trust Anchors: Feb 9 09:56:50.733602 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 09:56:50.738767 kernel: Bridge firewalling registered Feb 9 09:56:50.733631 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 09:56:50.737600 systemd-modules-load[290]: Inserted module 'br_netfilter' Feb 9 09:56:50.737756 systemd-resolved[291]: Defaulting to hostname 'linux'. Feb 9 09:56:50.743410 systemd[1]: Started systemd-resolved.service. Feb 9 09:56:50.744353 systemd[1]: Reached target nss-lookup.target. Feb 9 09:56:50.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:50.748353 kernel: audit: type=1130 audit(1707472610.743:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:56:50.748382 kernel: SCSI subsystem initialized Feb 9 09:56:50.749449 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 09:56:50.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:50.750972 systemd[1]: Starting dracut-cmdline.service... Feb 9 09:56:50.753373 kernel: audit: type=1130 audit(1707472610.749:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:50.755393 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 09:56:50.755423 kernel: device-mapper: uevent: version 1.0.3 Feb 9 09:56:50.756502 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 09:56:50.758737 systemd-modules-load[290]: Inserted module 'dm_multipath' Feb 9 09:56:50.759432 systemd[1]: Finished systemd-modules-load.service. Feb 9 09:56:50.760961 dracut-cmdline[308]: dracut-dracut-053 Feb 9 09:56:50.763586 kernel: audit: type=1130 audit(1707472610.760:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:50.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:50.762813 systemd[1]: Starting systemd-sysctl.service... 
Feb 9 09:56:50.764516 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d Feb 9 09:56:50.768488 systemd[1]: Finished systemd-sysctl.service. Feb 9 09:56:50.771330 kernel: audit: type=1130 audit(1707472610.768:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:50.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:50.819348 kernel: Loading iSCSI transport class v2.0-870. Feb 9 09:56:50.827347 kernel: iscsi: registered transport (tcp) Feb 9 09:56:50.840347 kernel: iscsi: registered transport (qla4xxx) Feb 9 09:56:50.840368 kernel: QLogic iSCSI HBA Driver Feb 9 09:56:50.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:50.872080 systemd[1]: Finished dracut-cmdline.service. Feb 9 09:56:50.875417 kernel: audit: type=1130 audit(1707472610.872:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:50.873428 systemd[1]: Starting dracut-pre-udev.service... 
Feb 9 09:56:50.916339 kernel: raid6: neonx8 gen() 12164 MB/s
Feb 9 09:56:50.933336 kernel: raid6: neonx8 xor() 10801 MB/s
Feb 9 09:56:50.950337 kernel: raid6: neonx4 gen() 13542 MB/s
Feb 9 09:56:50.967337 kernel: raid6: neonx4 xor() 11011 MB/s
Feb 9 09:56:50.984331 kernel: raid6: neonx2 gen() 12940 MB/s
Feb 9 09:56:51.001332 kernel: raid6: neonx2 xor() 10245 MB/s
Feb 9 09:56:51.018337 kernel: raid6: neonx1 gen() 10451 MB/s
Feb 9 09:56:51.035332 kernel: raid6: neonx1 xor() 8739 MB/s
Feb 9 09:56:51.052341 kernel: raid6: int64x8 gen() 6282 MB/s
Feb 9 09:56:51.069344 kernel: raid6: int64x8 xor() 3539 MB/s
Feb 9 09:56:51.086346 kernel: raid6: int64x4 gen() 7250 MB/s
Feb 9 09:56:51.103344 kernel: raid6: int64x4 xor() 3840 MB/s
Feb 9 09:56:51.120343 kernel: raid6: int64x2 gen() 6143 MB/s
Feb 9 09:56:51.137344 kernel: raid6: int64x2 xor() 3309 MB/s
Feb 9 09:56:51.154343 kernel: raid6: int64x1 gen() 5039 MB/s
Feb 9 09:56:51.171507 kernel: raid6: int64x1 xor() 2642 MB/s
Feb 9 09:56:51.171522 kernel: raid6: using algorithm neonx4 gen() 13542 MB/s
Feb 9 09:56:51.171531 kernel: raid6: .... xor() 11011 MB/s, rmw enabled
Feb 9 09:56:51.171539 kernel: raid6: using neon recovery algorithm
Feb 9 09:56:51.182474 kernel: xor: measuring software checksum speed
Feb 9 09:56:51.182489 kernel: 8regs : 17293 MB/sec
Feb 9 09:56:51.183336 kernel: 32regs : 20760 MB/sec
Feb 9 09:56:51.184452 kernel: arm64_neon : 27731 MB/sec
Feb 9 09:56:51.184463 kernel: xor: using function: arm64_neon (27731 MB/sec)
Feb 9 09:56:51.238349 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 09:56:51.248007 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 09:56:51.251512 kernel: audit: type=1130 audit(1707472611.248:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:51.251536 kernel: audit: type=1334 audit(1707472611.250:9): prog-id=7 op=LOAD
Feb 9 09:56:51.251546 kernel: audit: type=1334 audit(1707472611.251:10): prog-id=8 op=LOAD
Feb 9 09:56:51.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:51.250000 audit: BPF prog-id=7 op=LOAD
Feb 9 09:56:51.251000 audit: BPF prog-id=8 op=LOAD
Feb 9 09:56:51.251844 systemd[1]: Starting systemd-udevd.service...
Feb 9 09:56:51.265669 systemd-udevd[491]: Using default interface naming scheme 'v252'.
Feb 9 09:56:51.268961 systemd[1]: Started systemd-udevd.service.
Feb 9 09:56:51.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:51.271241 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 09:56:51.281472 dracut-pre-trigger[502]: rd.md=0: removing MD RAID activation
Feb 9 09:56:51.306476 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 09:56:51.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:51.307866 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 09:56:51.343620 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 09:56:51.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:51.373376 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 9 09:56:51.375610 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 9 09:56:51.375638 kernel: GPT:9289727 != 19775487
Feb 9 09:56:51.375647 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 9 09:56:51.375656 kernel: GPT:9289727 != 19775487
Feb 9 09:56:51.376850 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 9 09:56:51.376865 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 09:56:51.388347 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (551)
Feb 9 09:56:51.388794 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 09:56:51.389650 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 09:56:51.398023 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 09:56:51.401088 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 09:56:51.406001 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 09:56:51.407437 systemd[1]: Starting disk-uuid.service...
Feb 9 09:56:51.412987 disk-uuid[563]: Primary Header is updated.
Feb 9 09:56:51.412987 disk-uuid[563]: Secondary Entries is updated.
Feb 9 09:56:51.412987 disk-uuid[563]: Secondary Header is updated.
Feb 9 09:56:51.415947 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 09:56:52.428337 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 09:56:52.428525 disk-uuid[564]: The operation has completed successfully.
Feb 9 09:56:52.448578 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 09:56:52.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:52.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:52.448682 systemd[1]: Finished disk-uuid.service.
Feb 9 09:56:52.452556 systemd[1]: Starting verity-setup.service...
Feb 9 09:56:52.470356 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 9 09:56:52.493182 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 09:56:52.495117 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 09:56:52.496980 systemd[1]: Finished verity-setup.service.
Feb 9 09:56:52.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:52.540346 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 09:56:52.540899 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 09:56:52.541555 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 09:56:52.542232 systemd[1]: Starting ignition-setup.service...
Feb 9 09:56:52.544240 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 09:56:52.550490 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 09:56:52.550526 kernel: BTRFS info (device vda6): using free space tree
Feb 9 09:56:52.550536 kernel: BTRFS info (device vda6): has skinny extents
Feb 9 09:56:52.558364 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 09:56:52.563369 systemd[1]: Finished ignition-setup.service.
Feb 9 09:56:52.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:52.564802 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 09:56:52.618459 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 09:56:52.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:52.619000 audit: BPF prog-id=9 op=LOAD
Feb 9 09:56:52.620653 systemd[1]: Starting systemd-networkd.service...
Feb 9 09:56:52.644900 systemd-networkd[739]: lo: Link UP
Feb 9 09:56:52.644915 systemd-networkd[739]: lo: Gained carrier
Feb 9 09:56:52.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:52.646902 ignition[647]: Ignition 2.14.0
Feb 9 09:56:52.645260 systemd-networkd[739]: Enumeration completed
Feb 9 09:56:52.646909 ignition[647]: Stage: fetch-offline
Feb 9 09:56:52.645385 systemd[1]: Started systemd-networkd.service.
Feb 9 09:56:52.646946 ignition[647]: no configs at "/usr/lib/ignition/base.d"
Feb 9 09:56:52.645440 systemd-networkd[739]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 09:56:52.646955 ignition[647]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 09:56:52.646485 systemd-networkd[739]: eth0: Link UP
Feb 9 09:56:52.647081 ignition[647]: parsed url from cmdline: ""
Feb 9 09:56:52.646488 systemd-networkd[739]: eth0: Gained carrier
Feb 9 09:56:52.647085 ignition[647]: no config URL provided
Feb 9 09:56:52.646942 systemd[1]: Reached target network.target.
Feb 9 09:56:52.647090 ignition[647]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 09:56:52.648858 systemd[1]: Starting iscsiuio.service...
Feb 9 09:56:52.647097 ignition[647]: no config at "/usr/lib/ignition/user.ign"
Feb 9 09:56:52.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:52.659154 systemd[1]: Started iscsiuio.service.
Feb 9 09:56:52.647114 ignition[647]: op(1): [started] loading QEMU firmware config module
Feb 9 09:56:52.660926 systemd[1]: Starting iscsid.service...
Feb 9 09:56:52.647119 ignition[647]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 9 09:56:52.654420 ignition[647]: op(1): [finished] loading QEMU firmware config module
Feb 9 09:56:52.664973 iscsid[746]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 09:56:52.664973 iscsid[746]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 9 09:56:52.664973 iscsid[746]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 09:56:52.664973 iscsid[746]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 09:56:52.664973 iscsid[746]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 09:56:52.664973 iscsid[746]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 09:56:52.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:52.667141 systemd[1]: Started iscsid.service.
Feb 9 09:56:52.671375 systemd-networkd[739]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 9 09:56:52.673090 systemd[1]: Starting dracut-initqueue.service...
Feb 9 09:56:52.682936 systemd[1]: Finished dracut-initqueue.service.
Feb 9 09:56:52.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:52.683854 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 09:56:52.685210 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 09:56:52.686528 systemd[1]: Reached target remote-fs.target.
Feb 9 09:56:52.688646 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 09:56:52.696087 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 09:56:52.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:52.699068 ignition[647]: parsing config with SHA512: 00d1a87ef91368ae471bbb6142a09c4ff575abf9160f5b16792e2bf0718340b3f61fefe10a0cb4976b32c6818fbf344e78d03f8b7f0570f1287fa0b852d41a66
Feb 9 09:56:52.720924 unknown[647]: fetched base config from "system"
Feb 9 09:56:52.720935 unknown[647]: fetched user config from "qemu"
Feb 9 09:56:52.721465 ignition[647]: fetch-offline: fetch-offline passed
Feb 9 09:56:52.721528 ignition[647]: Ignition finished successfully
Feb 9 09:56:52.723361 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 09:56:52.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:52.724496 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 9 09:56:52.725186 systemd[1]: Starting ignition-kargs.service...
Feb 9 09:56:52.734164 ignition[760]: Ignition 2.14.0
Feb 9 09:56:52.734173 ignition[760]: Stage: kargs
Feb 9 09:56:52.734266 ignition[760]: no configs at "/usr/lib/ignition/base.d"
Feb 9 09:56:52.734276 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 09:56:52.735219 ignition[760]: kargs: kargs passed
Feb 9 09:56:52.735263 ignition[760]: Ignition finished successfully
Feb 9 09:56:52.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:52.738105 systemd[1]: Finished ignition-kargs.service.
Feb 9 09:56:52.739862 systemd[1]: Starting ignition-disks.service...
Feb 9 09:56:52.746396 ignition[766]: Ignition 2.14.0
Feb 9 09:56:52.746406 ignition[766]: Stage: disks
Feb 9 09:56:52.746492 ignition[766]: no configs at "/usr/lib/ignition/base.d"
Feb 9 09:56:52.748338 systemd[1]: Finished ignition-disks.service.
Feb 9 09:56:52.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:52.746501 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 09:56:52.749660 systemd[1]: Reached target initrd-root-device.target.
Feb 9 09:56:52.747303 ignition[766]: disks: disks passed
Feb 9 09:56:52.750718 systemd[1]: Reached target local-fs-pre.target.
Feb 9 09:56:52.747385 ignition[766]: Ignition finished successfully
Feb 9 09:56:52.752062 systemd[1]: Reached target local-fs.target.
Feb 9 09:56:52.753147 systemd[1]: Reached target sysinit.target.
Feb 9 09:56:52.754061 systemd[1]: Reached target basic.target.
Feb 9 09:56:52.755931 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 09:56:52.766061 systemd-fsck[774]: ROOT: clean, 602/553520 files, 56013/553472 blocks
Feb 9 09:56:52.770179 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 09:56:52.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:52.771816 systemd[1]: Mounting sysroot.mount...
Feb 9 09:56:52.778282 systemd[1]: Mounted sysroot.mount.
Feb 9 09:56:52.779386 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 09:56:52.778931 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 09:56:52.780867 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 09:56:52.781560 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 9 09:56:52.781605 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 09:56:52.781629 systemd[1]: Reached target ignition-diskful.target.
Feb 9 09:56:52.783597 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 09:56:52.784859 systemd[1]: Starting initrd-setup-root.service...
Feb 9 09:56:52.789039 initrd-setup-root[784]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 09:56:52.793172 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory
Feb 9 09:56:52.797070 initrd-setup-root[800]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 09:56:52.800658 initrd-setup-root[808]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 09:56:52.826143 systemd[1]: Finished initrd-setup-root.service.
Feb 9 09:56:52.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:52.827612 systemd[1]: Starting ignition-mount.service...
Feb 9 09:56:52.828758 systemd[1]: Starting sysroot-boot.service...
Feb 9 09:56:52.832963 bash[826]: umount: /sysroot/usr/share/oem: not mounted.
Feb 9 09:56:52.841708 ignition[828]: INFO : Ignition 2.14.0
Feb 9 09:56:52.842422 ignition[828]: INFO : Stage: mount
Feb 9 09:56:52.842869 ignition[828]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 09:56:52.842869 ignition[828]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 09:56:52.844245 ignition[828]: INFO : mount: mount passed
Feb 9 09:56:52.844245 ignition[828]: INFO : Ignition finished successfully
Feb 9 09:56:52.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:52.844476 systemd[1]: Finished ignition-mount.service.
Feb 9 09:56:52.847437 systemd[1]: Finished sysroot-boot.service.
Feb 9 09:56:52.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:53.503696 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 09:56:53.509348 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (836)
Feb 9 09:56:53.510718 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 09:56:53.510730 kernel: BTRFS info (device vda6): using free space tree
Feb 9 09:56:53.510743 kernel: BTRFS info (device vda6): has skinny extents
Feb 9 09:56:53.513820 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 09:56:53.515268 systemd[1]: Starting ignition-files.service...
Feb 9 09:56:53.528918 ignition[856]: INFO : Ignition 2.14.0
Feb 9 09:56:53.528918 ignition[856]: INFO : Stage: files
Feb 9 09:56:53.530065 ignition[856]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 09:56:53.530065 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 09:56:53.530065 ignition[856]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 09:56:53.534425 ignition[856]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 09:56:53.534425 ignition[856]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 09:56:53.536540 ignition[856]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 09:56:53.536540 ignition[856]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 09:56:53.538546 ignition[856]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 09:56:53.538546 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 9 09:56:53.538546 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 9 09:56:53.538546 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 9 09:56:53.538546 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1
Feb 9 09:56:53.536873 unknown[856]: wrote ssh authorized keys file for user: core
Feb 9 09:56:53.879554 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 09:56:54.070686 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742
Feb 9 09:56:54.072724 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 9 09:56:54.072724 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 9 09:56:54.072724 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1
Feb 9 09:56:54.278060 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 09:56:54.348519 systemd-networkd[739]: eth0: Gained IPv6LL
Feb 9 09:56:54.458577 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c
Feb 9 09:56:54.460683 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 9 09:56:54.460683 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 09:56:54.460683 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1
Feb 9 09:56:54.506385 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 9 09:56:54.900866 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db
Feb 9 09:56:54.903060 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 09:56:54.903060 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 09:56:54.903060 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1
Feb 9 09:56:54.927524 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 9 09:56:55.587060 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d
Feb 9 09:56:55.589485 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 09:56:55.589485 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 09:56:55.589485 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 09:56:55.589485 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 09:56:55.589485 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 09:56:55.589485 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 09:56:55.589485 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 09:56:55.589485 ignition[856]: INFO : files: op(b): [started] processing unit "containerd.service"
Feb 9 09:56:55.589485 ignition[856]: INFO : files: op(b): op(c): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 9 09:56:55.589485 ignition[856]: INFO : files: op(b): op(c): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 9 09:56:55.589485 ignition[856]: INFO : files: op(b): [finished] processing unit "containerd.service"
Feb 9 09:56:55.589485 ignition[856]: INFO : files: op(d): [started] processing unit "prepare-cni-plugins.service"
Feb 9 09:56:55.589485 ignition[856]: INFO : files: op(d): op(e): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 09:56:55.589485 ignition[856]: INFO : files: op(d): op(e): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 09:56:55.589485 ignition[856]: INFO : files: op(d): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 09:56:55.589485 ignition[856]: INFO : files: op(f): [started] processing unit "prepare-critools.service"
Feb 9 09:56:55.589485 ignition[856]: INFO : files: op(f): op(10): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 09:56:55.612731 ignition[856]: INFO : files: op(f): op(10): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 09:56:55.612731 ignition[856]: INFO : files: op(f): [finished] processing unit "prepare-critools.service"
Feb 9 09:56:55.612731 ignition[856]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Feb 9 09:56:55.612731 ignition[856]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 9 09:56:55.612731 ignition[856]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 9 09:56:55.612731 ignition[856]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Feb 9 09:56:55.612731 ignition[856]: INFO : files: op(13): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 09:56:55.612731 ignition[856]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 09:56:55.612731 ignition[856]: INFO : files: op(14): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 09:56:55.612731 ignition[856]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 09:56:55.612731 ignition[856]: INFO : files: op(15): [started] setting preset to disabled for "coreos-metadata.service"
Feb 9 09:56:55.612731 ignition[856]: INFO : files: op(15): op(16): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 9 09:56:55.641036 ignition[856]: INFO : files: op(15): op(16): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 9 09:56:55.642160 ignition[856]: INFO : files: op(15): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 9 09:56:55.642160 ignition[856]: INFO : files: createResultFile: createFiles: op(17): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 09:56:55.642160 ignition[856]: INFO : files: createResultFile: createFiles: op(17): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 09:56:55.642160 ignition[856]: INFO : files: files passed
Feb 9 09:56:55.642160 ignition[856]: INFO : Ignition finished successfully
Feb 9 09:56:55.651742 kernel: kauditd_printk_skb: 21 callbacks suppressed
Feb 9 09:56:55.651770 kernel: audit: type=1130 audit(1707472615.643:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:55.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:55.642674 systemd[1]: Finished ignition-files.service.
Feb 9 09:56:55.645074 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 9 09:56:55.653641 initrd-setup-root-after-ignition[880]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Feb 9 09:56:55.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:55.648185 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 9 09:56:55.662230 kernel: audit: type=1130 audit(1707472615.654:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:55.662254 kernel: audit: type=1130 audit(1707472615.657:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:55.662264 kernel: audit: type=1131 audit(1707472615.657:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:55.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:55.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:55.662375 initrd-setup-root-after-ignition[883]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 9 09:56:55.648965 systemd[1]: Starting ignition-quench.service...
Feb 9 09:56:55.652730 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 9 09:56:55.654675 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 9 09:56:55.654754 systemd[1]: Finished ignition-quench.service.
Feb 9 09:56:55.657847 systemd[1]: Reached target ignition-complete.target.
Feb 9 09:56:55.663667 systemd[1]: Starting initrd-parse-etc.service...
Feb 9 09:56:55.676447 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 9 09:56:55.676546 systemd[1]: Finished initrd-parse-etc.service.
Feb 9 09:56:55.677899 systemd[1]: Reached target initrd-fs.target.
Feb 9 09:56:55.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:55.680699 systemd[1]: Reached target initrd.target.
Feb 9 09:56:55.683932 kernel: audit: type=1130 audit(1707472615.677:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:55.683953 kernel: audit: type=1131 audit(1707472615.677:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:55.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:55.681859 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 9 09:56:55.682636 systemd[1]: Starting dracut-pre-pivot.service...
Feb 9 09:56:55.692799 systemd[1]: Finished dracut-pre-pivot.service.
Feb 9 09:56:55.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:55.694439 systemd[1]: Starting initrd-cleanup.service...
Feb 9 09:56:55.697641 kernel: audit: type=1130 audit(1707472615.693:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:55.703924 systemd[1]: Stopped target nss-lookup.target.
Feb 9 09:56:55.704846 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 9 09:56:55.706225 systemd[1]: Stopped target timers.target.
Feb 9 09:56:55.707432 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 9 09:56:55.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:55.707554 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 9 09:56:55.711670 kernel: audit: type=1131 audit(1707472615.708:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:55.708771 systemd[1]: Stopped target initrd.target.
Feb 9 09:56:55.712311 systemd[1]: Stopped target basic.target.
Feb 9 09:56:55.713459 systemd[1]: Stopped target ignition-complete.target.
Feb 9 09:56:55.714736 systemd[1]: Stopped target ignition-diskful.target.
Feb 9 09:56:55.715962 systemd[1]: Stopped target initrd-root-device.target.
Feb 9 09:56:55.717186 systemd[1]: Stopped target remote-fs.target. Feb 9 09:56:55.718435 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 09:56:55.719746 systemd[1]: Stopped target sysinit.target. Feb 9 09:56:55.721028 systemd[1]: Stopped target local-fs.target. Feb 9 09:56:55.722235 systemd[1]: Stopped target local-fs-pre.target. Feb 9 09:56:55.723507 systemd[1]: Stopped target swap.target. Feb 9 09:56:55.724571 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 09:56:55.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:55.724700 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 09:56:55.728907 kernel: audit: type=1131 audit(1707472615.725:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:55.728204 systemd[1]: Stopped target cryptsetup.target. Feb 9 09:56:55.729596 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 09:56:55.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:55.729704 systemd[1]: Stopped dracut-initqueue.service. Feb 9 09:56:55.734495 kernel: audit: type=1131 audit(1707472615.730:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:55.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:56:55.730873 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 09:56:55.730968 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 09:56:55.734047 systemd[1]: Stopped target paths.target. Feb 9 09:56:55.734983 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 09:56:55.740380 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 09:56:55.741225 systemd[1]: Stopped target slices.target. Feb 9 09:56:55.742452 systemd[1]: Stopped target sockets.target. Feb 9 09:56:55.743512 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 09:56:55.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:55.743645 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 09:56:55.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:55.744856 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 09:56:55.744955 systemd[1]: Stopped ignition-files.service. Feb 9 09:56:55.749095 iscsid[746]: iscsid shutting down. Feb 9 09:56:55.747127 systemd[1]: Stopping ignition-mount.service... Feb 9 09:56:55.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:55.748634 systemd[1]: Stopping iscsid.service... Feb 9 09:56:55.749511 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 09:56:55.749640 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 09:56:55.751555 systemd[1]: Stopping sysroot-boot.service... 
Feb 9 09:56:55.754658 ignition[896]: INFO : Ignition 2.14.0 Feb 9 09:56:55.754658 ignition[896]: INFO : Stage: umount Feb 9 09:56:55.754658 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 09:56:55.754658 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 09:56:55.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:55.758830 ignition[896]: INFO : umount: umount passed Feb 9 09:56:55.758830 ignition[896]: INFO : Ignition finished successfully Feb 9 09:56:55.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:55.756439 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 09:56:55.756608 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 09:56:55.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:55.757977 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 09:56:55.758065 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 09:56:55.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:55.760740 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 09:56:55.760838 systemd[1]: Stopped iscsid.service. Feb 9 09:56:55.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:56:55.763390 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 09:56:55.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:55.763928 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 09:56:55.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:55.764008 systemd[1]: Stopped ignition-mount.service. Feb 9 09:56:55.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:55.764963 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 09:56:55.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:55.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:55.765031 systemd[1]: Closed iscsid.socket. Feb 9 09:56:55.765800 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 09:56:55.765845 systemd[1]: Stopped ignition-disks.service. Feb 9 09:56:55.766940 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 09:56:55.766977 systemd[1]: Stopped ignition-kargs.service. Feb 9 09:56:55.768047 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 09:56:55.768082 systemd[1]: Stopped ignition-setup.service. Feb 9 09:56:55.769240 systemd[1]: Stopping iscsiuio.service... 
Feb 9 09:56:55.772017 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 09:56:55.772104 systemd[1]: Stopped iscsiuio.service. Feb 9 09:56:55.773016 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 09:56:55.773093 systemd[1]: Finished initrd-cleanup.service. Feb 9 09:56:55.774643 systemd[1]: Stopped target network.target. Feb 9 09:56:55.775649 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 09:56:55.775680 systemd[1]: Closed iscsiuio.socket. Feb 9 09:56:55.776885 systemd[1]: Stopping systemd-networkd.service... Feb 9 09:56:55.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:55.777514 systemd[1]: Stopping systemd-resolved.service... Feb 9 09:56:55.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:55.783375 systemd-networkd[739]: eth0: DHCPv6 lease lost Feb 9 09:56:55.791000 audit: BPF prog-id=9 op=UNLOAD Feb 9 09:56:55.785058 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 09:56:55.785150 systemd[1]: Stopped systemd-networkd.service. Feb 9 09:56:55.789228 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 09:56:55.789341 systemd[1]: Stopped systemd-resolved.service. Feb 9 09:56:55.791498 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 09:56:55.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:55.791529 systemd[1]: Closed systemd-networkd.socket. 
Feb 9 09:56:55.798000 audit: BPF prog-id=6 op=UNLOAD Feb 9 09:56:55.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:55.793900 systemd[1]: Stopping network-cleanup.service... Feb 9 09:56:55.795173 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 09:56:55.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:55.795233 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 09:56:55.796407 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 09:56:55.796444 systemd[1]: Stopped systemd-sysctl.service. Feb 9 09:56:55.799039 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 09:56:55.799089 systemd[1]: Stopped systemd-modules-load.service. Feb 9 09:56:55.800732 systemd[1]: Stopping systemd-udevd.service... Feb 9 09:56:55.805020 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 09:56:55.808815 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 09:56:55.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:55.808922 systemd[1]: Stopped network-cleanup.service. Feb 9 09:56:55.811805 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 09:56:55.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:55.811925 systemd[1]: Stopped systemd-udevd.service. 
Feb 9 09:56:55.813163 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 09:56:55.813201 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 09:56:55.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:55.814209 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 09:56:55.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:55.814242 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 09:56:55.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:55.815270 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 09:56:55.815319 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 09:56:55.816717 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 09:56:55.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:55.816761 systemd[1]: Stopped dracut-cmdline.service. Feb 9 09:56:55.817807 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 09:56:55.817850 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 09:56:55.820039 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 09:56:55.821077 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Feb 9 09:56:55.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:55.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:55.821140 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 09:56:55.825442 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 09:56:55.825531 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 09:56:55.832555 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 09:56:55.832656 systemd[1]: Stopped sysroot-boot.service. Feb 9 09:56:55.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:55.833834 systemd[1]: Reached target initrd-switch-root.target. Feb 9 09:56:55.834883 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 09:56:55.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:55.834943 systemd[1]: Stopped initrd-setup-root.service. Feb 9 09:56:55.836762 systemd[1]: Starting initrd-switch-root.service... Feb 9 09:56:55.842909 systemd[1]: Switching root. 
Feb 9 09:56:55.843000 audit: BPF prog-id=8 op=UNLOAD Feb 9 09:56:55.843000 audit: BPF prog-id=7 op=UNLOAD Feb 9 09:56:55.847000 audit: BPF prog-id=5 op=UNLOAD Feb 9 09:56:55.847000 audit: BPF prog-id=4 op=UNLOAD Feb 9 09:56:55.847000 audit: BPF prog-id=3 op=UNLOAD Feb 9 09:56:55.865656 systemd-journald[289]: Journal stopped Feb 9 09:56:57.942127 systemd-journald[289]: Received SIGTERM from PID 1 (systemd). Feb 9 09:56:57.942184 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 09:56:57.942196 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 09:56:57.942207 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 09:56:57.942219 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 09:56:57.942233 kernel: SELinux: policy capability open_perms=1 Feb 9 09:56:57.942243 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 09:56:57.942252 kernel: SELinux: policy capability always_check_network=0 Feb 9 09:56:57.942262 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 09:56:57.942272 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 09:56:57.942282 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 09:56:57.942292 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 09:56:57.942304 systemd[1]: Successfully loaded SELinux policy in 31.733ms. Feb 9 09:56:57.942344 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.698ms. Feb 9 09:56:57.942357 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 09:56:57.942368 systemd[1]: Detected virtualization kvm. Feb 9 09:56:57.942379 systemd[1]: Detected architecture arm64. 
Feb 9 09:56:57.942389 systemd[1]: Detected first boot. Feb 9 09:56:57.942400 systemd[1]: Initializing machine ID from VM UUID. Feb 9 09:56:57.942410 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 09:56:57.942422 systemd[1]: Populated /etc with preset unit settings. Feb 9 09:56:57.942433 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:56:57.942445 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:56:57.942456 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:56:57.942468 systemd[1]: Queued start job for default target multi-user.target. Feb 9 09:56:57.942479 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 9 09:56:57.942489 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 09:56:57.942501 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 09:56:57.942512 systemd[1]: Created slice system-getty.slice. Feb 9 09:56:57.942522 systemd[1]: Created slice system-modprobe.slice. Feb 9 09:56:57.942533 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 09:56:57.942544 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 09:56:57.942561 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 09:56:57.942574 systemd[1]: Created slice user.slice. Feb 9 09:56:57.942586 systemd[1]: Started systemd-ask-password-console.path. Feb 9 09:56:57.942596 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 09:56:57.942608 systemd[1]: Set up automount boot.automount. Feb 9 09:56:57.942619 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. 
Feb 9 09:56:57.942629 systemd[1]: Reached target integritysetup.target. Feb 9 09:56:57.942639 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 09:56:57.942651 systemd[1]: Reached target remote-fs.target. Feb 9 09:56:57.942661 systemd[1]: Reached target slices.target. Feb 9 09:56:57.942671 systemd[1]: Reached target swap.target. Feb 9 09:56:57.942682 systemd[1]: Reached target torcx.target. Feb 9 09:56:57.942697 systemd[1]: Reached target veritysetup.target. Feb 9 09:56:57.942708 systemd[1]: Listening on systemd-coredump.socket. Feb 9 09:56:57.942718 systemd[1]: Listening on systemd-initctl.socket. Feb 9 09:56:57.942729 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 09:56:57.942739 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 09:56:57.942750 systemd[1]: Listening on systemd-journald.socket. Feb 9 09:56:57.942761 systemd[1]: Listening on systemd-networkd.socket. Feb 9 09:56:57.942772 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 09:56:57.942784 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 09:56:57.942795 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 09:56:57.942808 systemd[1]: Mounting dev-hugepages.mount... Feb 9 09:56:57.942823 systemd[1]: Mounting dev-mqueue.mount... Feb 9 09:56:57.942834 systemd[1]: Mounting media.mount... Feb 9 09:56:57.942844 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 09:56:57.942854 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 09:56:57.942864 systemd[1]: Mounting tmp.mount... Feb 9 09:56:57.942875 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 09:56:57.942887 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 09:56:57.942902 systemd[1]: Starting kmod-static-nodes.service... Feb 9 09:56:57.942915 systemd[1]: Starting modprobe@configfs.service... Feb 9 09:56:57.942925 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 09:56:57.942936 systemd[1]: Starting modprobe@drm.service... 
Feb 9 09:56:57.942956 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 09:56:57.942967 systemd[1]: Starting modprobe@fuse.service... Feb 9 09:56:57.942978 systemd[1]: Starting modprobe@loop.service... Feb 9 09:56:57.942989 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 09:56:57.943000 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 9 09:56:57.943013 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 9 09:56:57.943024 systemd[1]: Starting systemd-journald.service... Feb 9 09:56:57.943034 systemd[1]: Starting systemd-modules-load.service... Feb 9 09:56:57.943046 systemd[1]: Starting systemd-network-generator.service... Feb 9 09:56:57.943058 systemd[1]: Starting systemd-remount-fs.service... Feb 9 09:56:57.943069 kernel: fuse: init (API version 7.34) Feb 9 09:56:57.943080 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 09:56:57.943090 systemd[1]: Mounted dev-hugepages.mount. Feb 9 09:56:57.943101 systemd[1]: Mounted dev-mqueue.mount. Feb 9 09:56:57.943111 systemd[1]: Mounted media.mount. Feb 9 09:56:57.943124 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 09:56:57.943134 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 09:56:57.943144 systemd[1]: Mounted tmp.mount. Feb 9 09:56:57.943155 systemd[1]: Finished kmod-static-nodes.service. Feb 9 09:56:57.943165 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 09:56:57.943176 systemd[1]: Finished modprobe@configfs.service. Feb 9 09:56:57.943186 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 09:56:57.943197 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 09:56:57.943207 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 09:56:57.943219 systemd[1]: Finished modprobe@drm.service. 
Feb 9 09:56:57.943229 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 09:56:57.943240 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 09:56:57.943250 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 09:56:57.943261 systemd[1]: Finished modprobe@fuse.service. Feb 9 09:56:57.943272 systemd[1]: Finished systemd-modules-load.service. Feb 9 09:56:57.943284 systemd[1]: Finished systemd-network-generator.service. Feb 9 09:56:57.943295 kernel: loop: module loaded Feb 9 09:56:57.943310 systemd-journald[1025]: Journal started Feb 9 09:56:57.943365 systemd-journald[1025]: Runtime Journal (/run/log/journal/8ba6db643a824b86816804ac1c30e5c1) is 6.0M, max 48.7M, 42.6M free. Feb 9 09:56:57.850000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 09:56:57.850000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 09:56:57.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:57.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:57.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:56:57.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:57.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:57.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:57.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:57.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:57.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:57.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:57.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:56:57.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:57.940000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 09:56:57.940000 audit[1025]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffcdb82b90 a2=4000 a3=1 items=0 ppid=1 pid=1025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:56:57.940000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 09:56:57.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:57.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:57.948374 systemd[1]: Finished systemd-remount-fs.service. Feb 9 09:56:57.948403 systemd[1]: Started systemd-journald.service. Feb 9 09:56:57.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:57.950376 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 09:56:57.950626 systemd[1]: Finished modprobe@loop.service. 
Feb 9 09:56:57.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:57.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:57.951843 systemd[1]: Reached target network-pre.target. Feb 9 09:56:57.953578 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 09:56:57.955364 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 09:56:57.955921 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 09:56:57.957373 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 09:56:57.960874 systemd[1]: Starting systemd-journal-flush.service... Feb 9 09:56:57.961638 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 09:56:57.962659 systemd[1]: Starting systemd-random-seed.service... Feb 9 09:56:57.963349 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 09:56:57.965147 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:56:57.967902 systemd-journald[1025]: Time spent on flushing to /var/log/journal/8ba6db643a824b86816804ac1c30e5c1 is 14.101ms for 938 entries. Feb 9 09:56:57.967902 systemd-journald[1025]: System Journal (/var/log/journal/8ba6db643a824b86816804ac1c30e5c1) is 8.0M, max 195.6M, 187.6M free. Feb 9 09:56:57.997357 systemd-journald[1025]: Received client request to flush runtime journal. Feb 9 09:56:57.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 09:56:57.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:57.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:57.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:57.968022 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 09:56:57.969547 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 09:56:57.972593 systemd[1]: Finished systemd-random-seed.service. Feb 9 09:56:57.973675 systemd[1]: Reached target first-boot-complete.target. Feb 9 09:56:57.980220 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 09:56:57.981166 systemd[1]: Finished systemd-sysctl.service. Feb 9 09:56:57.982909 systemd[1]: Starting systemd-sysusers.service... Feb 9 09:56:57.994438 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 09:56:57.996532 systemd[1]: Starting systemd-udev-settle.service... Feb 9 09:56:57.998144 systemd[1]: Finished systemd-journal-flush.service. Feb 9 09:56:57.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:58.002996 udevadm[1085]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. 
Feb 9 09:56:58.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.008697 systemd[1]: Finished systemd-sysusers.service.
Feb 9 09:56:58.010599 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 09:56:58.026397 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 09:56:58.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.338105 systemd[1]: Finished systemd-hwdb-update.service.
Feb 9 09:56:58.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.340065 systemd[1]: Starting systemd-udevd.service...
Feb 9 09:56:58.357192 systemd-udevd[1092]: Using default interface naming scheme 'v252'.
Feb 9 09:56:58.377120 systemd[1]: Started systemd-udevd.service.
Feb 9 09:56:58.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.380237 systemd[1]: Starting systemd-networkd.service...
Feb 9 09:56:58.395941 systemd[1]: Starting systemd-userdbd.service...
Feb 9 09:56:58.399398 systemd[1]: Found device dev-ttyAMA0.device.
Feb 9 09:56:58.441883 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 09:56:58.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.460396 systemd[1]: Started systemd-userdbd.service.
Feb 9 09:56:58.471707 systemd[1]: Finished systemd-udev-settle.service.
Feb 9 09:56:58.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.473550 systemd[1]: Starting lvm2-activation-early.service...
Feb 9 09:56:58.495520 lvm[1125]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 09:56:58.507794 systemd-networkd[1100]: lo: Link UP
Feb 9 09:56:58.507804 systemd-networkd[1100]: lo: Gained carrier
Feb 9 09:56:58.508132 systemd-networkd[1100]: Enumeration completed
Feb 9 09:56:58.508235 systemd-networkd[1100]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 09:56:58.508255 systemd[1]: Started systemd-networkd.service.
Feb 9 09:56:58.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.511142 systemd-networkd[1100]: eth0: Link UP
Feb 9 09:56:58.511154 systemd-networkd[1100]: eth0: Gained carrier
Feb 9 09:56:58.525246 systemd[1]: Finished lvm2-activation-early.service.
Feb 9 09:56:58.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.526065 systemd[1]: Reached target cryptsetup.target.
Feb 9 09:56:58.527813 systemd[1]: Starting lvm2-activation.service...
Feb 9 09:56:58.531543 lvm[1128]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 09:56:58.534587 systemd-networkd[1100]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 9 09:56:58.563238 systemd[1]: Finished lvm2-activation.service.
Feb 9 09:56:58.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.564020 systemd[1]: Reached target local-fs-pre.target.
Feb 9 09:56:58.564668 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 9 09:56:58.564695 systemd[1]: Reached target local-fs.target.
Feb 9 09:56:58.565235 systemd[1]: Reached target machines.target.
Feb 9 09:56:58.567024 systemd[1]: Starting ldconfig.service...
Feb 9 09:56:58.567875 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 9 09:56:58.567930 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 09:56:58.569040 systemd[1]: Starting systemd-boot-update.service...
Feb 9 09:56:58.570706 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 9 09:56:58.573277 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 9 09:56:58.574050 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 9 09:56:58.574097 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 9 09:56:58.575174 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 9 09:56:58.578998 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1131 (bootctl)
Feb 9 09:56:58.579960 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 9 09:56:58.581092 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 9 09:56:58.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.591527 systemd-tmpfiles[1134]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 9 09:56:58.596525 systemd-tmpfiles[1134]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 9 09:56:58.602351 systemd-tmpfiles[1134]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 9 09:56:58.657923 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 9 09:56:58.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.673945 systemd-fsck[1140]: fsck.fat 4.2 (2021-01-31)
Feb 9 09:56:58.673945 systemd-fsck[1140]: /dev/vda1: 236 files, 113719/258078 clusters
Feb 9 09:56:58.676858 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 9 09:56:58.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.743363 ldconfig[1130]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 9 09:56:58.747387 systemd[1]: Finished ldconfig.service.
Feb 9 09:56:58.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.913765 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 9 09:56:58.915272 systemd[1]: Mounting boot.mount...
Feb 9 09:56:58.921898 systemd[1]: Mounted boot.mount.
Feb 9 09:56:58.929484 systemd[1]: Finished systemd-boot-update.service.
Feb 9 09:56:58.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.983359 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 9 09:56:58.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.985391 systemd[1]: Starting audit-rules.service...
Feb 9 09:56:58.987011 systemd[1]: Starting clean-ca-certificates.service...
Feb 9 09:56:58.988799 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 9 09:56:58.991093 systemd[1]: Starting systemd-resolved.service...
Feb 9 09:56:58.993458 systemd[1]: Starting systemd-timesyncd.service...
Feb 9 09:56:58.995541 systemd[1]: Starting systemd-update-utmp.service...
Feb 9 09:56:58.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.997317 systemd[1]: Finished clean-ca-certificates.service.
Feb 9 09:56:58.998654 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 9 09:56:59.007000 audit[1161]: SYSTEM_BOOT pid=1161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:59.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:59.011236 systemd[1]: Finished systemd-update-utmp.service.
Feb 9 09:56:59.012777 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 9 09:56:59.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:59.014984 systemd[1]: Starting systemd-update-done.service...
Feb 9 09:56:59.021287 systemd[1]: Finished systemd-update-done.service.
Feb 9 09:56:59.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:59.033000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 9 09:56:59.033000 audit[1174]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe9bcb9a0 a2=420 a3=0 items=0 ppid=1149 pid=1174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:56:59.033000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 9 09:56:59.034235 augenrules[1174]: No rules
Feb 9 09:56:59.034865 systemd[1]: Finished audit-rules.service.
Feb 9 09:56:59.064058 systemd[1]: Started systemd-timesyncd.service.
Feb 9 09:56:59.064882 systemd-timesyncd[1158]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 9 09:56:59.065148 systemd[1]: Reached target time-set.target.
Feb 9 09:56:59.065294 systemd-timesyncd[1158]: Initial clock synchronization to Fri 2024-02-09 09:56:58.899497 UTC.
Feb 9 09:56:59.067354 systemd-resolved[1154]: Positive Trust Anchors:
Feb 9 09:56:59.067366 systemd-resolved[1154]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 09:56:59.067392 systemd-resolved[1154]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 09:56:59.074958 systemd-resolved[1154]: Defaulting to hostname 'linux'.
Feb 9 09:56:59.076314 systemd[1]: Started systemd-resolved.service.
Feb 9 09:56:59.077139 systemd[1]: Reached target network.target.
Feb 9 09:56:59.077797 systemd[1]: Reached target nss-lookup.target.
Feb 9 09:56:59.078544 systemd[1]: Reached target sysinit.target.
Feb 9 09:56:59.079293 systemd[1]: Started motdgen.path.
Feb 9 09:56:59.079947 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 9 09:56:59.080993 systemd[1]: Started logrotate.timer.
Feb 9 09:56:59.081719 systemd[1]: Started mdadm.timer.
Feb 9 09:56:59.082339 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 9 09:56:59.083068 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 9 09:56:59.083100 systemd[1]: Reached target paths.target.
Feb 9 09:56:59.083757 systemd[1]: Reached target timers.target.
Feb 9 09:56:59.084710 systemd[1]: Listening on dbus.socket.
Feb 9 09:56:59.086505 systemd[1]: Starting docker.socket...
Feb 9 09:56:59.088001 systemd[1]: Listening on sshd.socket.
Feb 9 09:56:59.088777 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 09:56:59.089091 systemd[1]: Listening on docker.socket.
Feb 9 09:56:59.089864 systemd[1]: Reached target sockets.target.
Feb 9 09:56:59.090548 systemd[1]: Reached target basic.target.
Feb 9 09:56:59.091360 systemd[1]: System is tainted: cgroupsv1
Feb 9 09:56:59.091408 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 09:56:59.091430 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 09:56:59.092462 systemd[1]: Starting containerd.service...
Feb 9 09:56:59.094175 systemd[1]: Starting dbus.service...
Feb 9 09:56:59.095758 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 9 09:56:59.097566 systemd[1]: Starting extend-filesystems.service...
Feb 9 09:56:59.098358 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 9 09:56:59.099472 systemd[1]: Starting motdgen.service...
Feb 9 09:56:59.101181 systemd[1]: Starting prepare-cni-plugins.service...
Feb 9 09:56:59.103282 systemd[1]: Starting prepare-critools.service...
Feb 9 09:56:59.104495 jq[1186]: false
Feb 9 09:56:59.105631 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 9 09:56:59.107435 systemd[1]: Starting sshd-keygen.service...
Feb 9 09:56:59.109934 systemd[1]: Starting systemd-logind.service...
Feb 9 09:56:59.113404 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 09:56:59.113467 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 9 09:56:59.115274 systemd[1]: Starting update-engine.service...
Feb 9 09:56:59.116915 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 9 09:56:59.119136 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 9 09:56:59.122385 jq[1207]: true
Feb 9 09:56:59.121519 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 9 09:56:59.123384 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 9 09:56:59.123604 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 9 09:56:59.137183 tar[1211]: ./
Feb 9 09:56:59.137183 tar[1211]: ./macvlan
Feb 9 09:56:59.137842 jq[1215]: true
Feb 9 09:56:59.140285 tar[1214]: crictl
Feb 9 09:56:59.140535 extend-filesystems[1187]: Found vda
Feb 9 09:56:59.140535 extend-filesystems[1187]: Found vda1
Feb 9 09:56:59.140535 extend-filesystems[1187]: Found vda2
Feb 9 09:56:59.140535 extend-filesystems[1187]: Found vda3
Feb 9 09:56:59.140535 extend-filesystems[1187]: Found usr
Feb 9 09:56:59.140535 extend-filesystems[1187]: Found vda4
Feb 9 09:56:59.140535 extend-filesystems[1187]: Found vda6
Feb 9 09:56:59.140535 extend-filesystems[1187]: Found vda7
Feb 9 09:56:59.148569 extend-filesystems[1187]: Found vda9
Feb 9 09:56:59.148569 extend-filesystems[1187]: Checking size of /dev/vda9
Feb 9 09:56:59.148076 systemd[1]: motdgen.service: Deactivated successfully.
Feb 9 09:56:59.148307 systemd[1]: Finished motdgen.service.
Feb 9 09:56:59.170814 extend-filesystems[1187]: Resized partition /dev/vda9
Feb 9 09:56:59.176194 dbus-daemon[1185]: [system] SELinux support is enabled
Feb 9 09:56:59.176540 systemd[1]: Started dbus.service.
Feb 9 09:56:59.177008 extend-filesystems[1246]: resize2fs 1.46.5 (30-Dec-2021)
Feb 9 09:56:59.179216 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 9 09:56:59.179278 systemd[1]: Reached target system-config.target.
Feb 9 09:56:59.180098 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 9 09:56:59.180114 systemd[1]: Reached target user-config.target.
Feb 9 09:56:59.188341 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 9 09:56:59.207684 systemd-logind[1200]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 9 09:56:59.208406 systemd-logind[1200]: New seat seat0.
Feb 9 09:56:59.213799 systemd[1]: Started systemd-logind.service.
Feb 9 09:56:59.217342 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 9 09:56:59.234895 update_engine[1203]: I0209 09:56:59.213045 1203 main.cc:92] Flatcar Update Engine starting
Feb 9 09:56:59.234895 update_engine[1203]: I0209 09:56:59.224610 1203 update_check_scheduler.cc:74] Next update check in 3m4s
Feb 9 09:56:59.220981 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 9 09:56:59.235249 bash[1245]: Updated "/home/core/.ssh/authorized_keys"
Feb 9 09:56:59.224561 systemd[1]: Started update-engine.service.
Feb 9 09:56:59.226864 systemd[1]: Started locksmithd.service.
Feb 9 09:56:59.236527 extend-filesystems[1246]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 9 09:56:59.236527 extend-filesystems[1246]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 9 09:56:59.236527 extend-filesystems[1246]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 9 09:56:59.243377 extend-filesystems[1187]: Resized filesystem in /dev/vda9
Feb 9 09:56:59.237212 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 9 09:56:59.237494 systemd[1]: Finished extend-filesystems.service.
Feb 9 09:56:59.252136 tar[1211]: ./static
Feb 9 09:56:59.265543 env[1216]: time="2024-02-09T09:56:59.265491240Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 9 09:56:59.275997 tar[1211]: ./vlan
Feb 9 09:56:59.308671 tar[1211]: ./portmap
Feb 9 09:56:59.335503 env[1216]: time="2024-02-09T09:56:59.335049280Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 9 09:56:59.335503 env[1216]: time="2024-02-09T09:56:59.335202560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 9 09:56:59.336544 env[1216]: time="2024-02-09T09:56:59.336497280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 9 09:56:59.336544 env[1216]: time="2024-02-09T09:56:59.336534400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 9 09:56:59.338488 env[1216]: time="2024-02-09T09:56:59.336781160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 09:56:59.338488 env[1216]: time="2024-02-09T09:56:59.336815840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 9 09:56:59.338488 env[1216]: time="2024-02-09T09:56:59.336830400Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 9 09:56:59.338488 env[1216]: time="2024-02-09T09:56:59.336840760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 9 09:56:59.338488 env[1216]: time="2024-02-09T09:56:59.336911920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 9 09:56:59.338488 env[1216]: time="2024-02-09T09:56:59.337228720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 9 09:56:59.338488 env[1216]: time="2024-02-09T09:56:59.337383240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 09:56:59.338488 env[1216]: time="2024-02-09T09:56:59.337400200Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 9 09:56:59.338488 env[1216]: time="2024-02-09T09:56:59.337452240Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 9 09:56:59.338488 env[1216]: time="2024-02-09T09:56:59.337464400Z" level=info msg="metadata content store policy set" policy=shared
Feb 9 09:56:59.338708 tar[1211]: ./host-local
Feb 9 09:56:59.342067 env[1216]: time="2024-02-09T09:56:59.342020600Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 9 09:56:59.342067 env[1216]: time="2024-02-09T09:56:59.342062520Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 9 09:56:59.342182 env[1216]: time="2024-02-09T09:56:59.342075680Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 9 09:56:59.342182 env[1216]: time="2024-02-09T09:56:59.342105800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 9 09:56:59.342182 env[1216]: time="2024-02-09T09:56:59.342119520Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 9 09:56:59.342182 env[1216]: time="2024-02-09T09:56:59.342134200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 9 09:56:59.342182 env[1216]: time="2024-02-09T09:56:59.342147200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 9 09:56:59.342514 env[1216]: time="2024-02-09T09:56:59.342492200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 9 09:56:59.342556 env[1216]: time="2024-02-09T09:56:59.342525760Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 9 09:56:59.342556 env[1216]: time="2024-02-09T09:56:59.342540120Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 9 09:56:59.342634 env[1216]: time="2024-02-09T09:56:59.342560560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 9 09:56:59.342634 env[1216]: time="2024-02-09T09:56:59.342579000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 9 09:56:59.342780 env[1216]: time="2024-02-09T09:56:59.342725560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 9 09:56:59.342840 env[1216]: time="2024-02-09T09:56:59.342820480Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 9 09:56:59.345507 env[1216]: time="2024-02-09T09:56:59.343198560Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 9 09:56:59.345507 env[1216]: time="2024-02-09T09:56:59.343232560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 9 09:56:59.345507 env[1216]: time="2024-02-09T09:56:59.343247040Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 9 09:56:59.345507 env[1216]: time="2024-02-09T09:56:59.343369560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 9 09:56:59.345507 env[1216]: time="2024-02-09T09:56:59.343384240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 9 09:56:59.345507 env[1216]: time="2024-02-09T09:56:59.343396120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 9 09:56:59.345507 env[1216]: time="2024-02-09T09:56:59.343406600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 9 09:56:59.345507 env[1216]: time="2024-02-09T09:56:59.343418400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 9 09:56:59.345507 env[1216]: time="2024-02-09T09:56:59.343429840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 9 09:56:59.345507 env[1216]: time="2024-02-09T09:56:59.343440520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 9 09:56:59.345507 env[1216]: time="2024-02-09T09:56:59.343451600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 9 09:56:59.345507 env[1216]: time="2024-02-09T09:56:59.343465000Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 9 09:56:59.345507 env[1216]: time="2024-02-09T09:56:59.343593520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 9 09:56:59.345507 env[1216]: time="2024-02-09T09:56:59.343610920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 9 09:56:59.345507 env[1216]: time="2024-02-09T09:56:59.343622880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 9 09:56:59.345031 systemd[1]: Started containerd.service.
Feb 9 09:56:59.345901 env[1216]: time="2024-02-09T09:56:59.343641120Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 9 09:56:59.345901 env[1216]: time="2024-02-09T09:56:59.343657160Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 9 09:56:59.345901 env[1216]: time="2024-02-09T09:56:59.343667600Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 9 09:56:59.345901 env[1216]: time="2024-02-09T09:56:59.343683440Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 9 09:56:59.345901 env[1216]: time="2024-02-09T09:56:59.343714880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 9 09:56:59.346003 env[1216]: time="2024-02-09T09:56:59.343905200Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 9 09:56:59.346003 env[1216]: time="2024-02-09T09:56:59.343956040Z" level=info msg="Connect containerd service"
Feb 9 09:56:59.346003 env[1216]: time="2024-02-09T09:56:59.343987600Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 9 09:56:59.346003 env[1216]: time="2024-02-09T09:56:59.344521200Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 09:56:59.346003 env[1216]: time="2024-02-09T09:56:59.344846880Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 9 09:56:59.346003 env[1216]: time="2024-02-09T09:56:59.344883240Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 9 09:56:59.346003 env[1216]: time="2024-02-09T09:56:59.344927680Z" level=info msg="containerd successfully booted in 0.080496s"
Feb 9 09:56:59.350109 env[1216]: time="2024-02-09T09:56:59.350076960Z" level=info msg="Start subscribing containerd event"
Feb 9 09:56:59.350228 env[1216]: time="2024-02-09T09:56:59.350211720Z" level=info msg="Start recovering state"
Feb 9 09:56:59.350359 env[1216]: time="2024-02-09T09:56:59.350319400Z" level=info msg="Start event monitor"
Feb 9 09:56:59.350432 env[1216]: time="2024-02-09T09:56:59.350418120Z" level=info msg="Start snapshots syncer"
Feb 9 09:56:59.350484 env[1216]: time="2024-02-09T09:56:59.350472480Z" level=info msg="Start cni network conf syncer for default"
Feb 9 09:56:59.350539 env[1216]: time="2024-02-09T09:56:59.350527920Z" level=info msg="Start streaming server"
Feb 9 09:56:59.361597 locksmithd[1249]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 9 09:56:59.365898 tar[1211]: ./vrf
Feb 9 09:56:59.394714 tar[1211]: ./bridge
Feb 9 09:56:59.428438 tar[1211]: ./tuning
Feb 9 09:56:59.455980 tar[1211]: ./firewall
Feb 9 09:56:59.485855 tar[1211]: ./host-device
Feb 9 09:56:59.512290 tar[1211]: ./sbr
Feb 9 09:56:59.536244 tar[1211]: ./loopback
Feb 9 09:56:59.559489 tar[1211]: ./dhcp
Feb 9 09:56:59.599402 systemd[1]: Finished prepare-critools.service.
Feb 9 09:56:59.625478 tar[1211]: ./ptp
Feb 9 09:56:59.653629 tar[1211]: ./ipvlan
Feb 9 09:56:59.681235 tar[1211]: ./bandwidth
Feb 9 09:56:59.716278 systemd[1]: Finished prepare-cni-plugins.service.
Feb 9 09:57:00.491458 systemd-networkd[1100]: eth0: Gained IPv6LL
Feb 9 09:57:00.664176 sshd_keygen[1217]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 9 09:57:00.681432 systemd[1]: Finished sshd-keygen.service.
Feb 9 09:57:00.683660 systemd[1]: Starting issuegen.service...
Feb 9 09:57:00.688071 systemd[1]: issuegen.service: Deactivated successfully.
Feb 9 09:57:00.688261 systemd[1]: Finished issuegen.service.
Feb 9 09:57:00.690340 systemd[1]: Starting systemd-user-sessions.service...
Feb 9 09:57:00.695716 systemd[1]: Finished systemd-user-sessions.service.
Feb 9 09:57:00.697906 systemd[1]: Started getty@tty1.service.
Feb 9 09:57:00.699818 systemd[1]: Started serial-getty@ttyAMA0.service.
Feb 9 09:57:00.700794 systemd[1]: Reached target getty.target.
Feb 9 09:57:00.701549 systemd[1]: Reached target multi-user.target.
Feb 9 09:57:00.703279 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 9 09:57:00.709332 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 9 09:57:00.709525 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 9 09:57:00.710480 systemd[1]: Startup finished in 5.892s (kernel) + 4.789s (userspace) = 10.682s.
Feb 9 09:57:03.156185 systemd[1]: Created slice system-sshd.slice.
Feb 9 09:57:03.157331 systemd[1]: Started sshd@0-10.0.0.81:22-10.0.0.1:36004.service.
Feb 9 09:57:03.208091 sshd[1287]: Accepted publickey for core from 10.0.0.1 port 36004 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg
Feb 9 09:57:03.210076 sshd[1287]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:57:03.222939 systemd-logind[1200]: New session 1 of user core.
Feb 9 09:57:03.223871 systemd[1]: Created slice user-500.slice.
Feb 9 09:57:03.224871 systemd[1]: Starting user-runtime-dir@500.service...
Feb 9 09:57:03.235452 systemd[1]: Finished user-runtime-dir@500.service.
Feb 9 09:57:03.236797 systemd[1]: Starting user@500.service...
Feb 9 09:57:03.239790 (systemd)[1292]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:57:03.297264 systemd[1292]: Queued start job for default target default.target.
Feb 9 09:57:03.297504 systemd[1292]: Reached target paths.target.
Feb 9 09:57:03.297519 systemd[1292]: Reached target sockets.target.
Feb 9 09:57:03.297530 systemd[1292]: Reached target timers.target.
Feb 9 09:57:03.297553 systemd[1292]: Reached target basic.target.
Feb 9 09:57:03.297600 systemd[1292]: Reached target default.target.
Feb 9 09:57:03.297622 systemd[1292]: Startup finished in 52ms.
Feb 9 09:57:03.297706 systemd[1]: Started user@500.service.
Feb 9 09:57:03.298679 systemd[1]: Started session-1.scope.
Feb 9 09:57:03.347482 systemd[1]: Started sshd@1-10.0.0.81:22-10.0.0.1:36012.service.
Feb 9 09:57:03.386176 sshd[1301]: Accepted publickey for core from 10.0.0.1 port 36012 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg
Feb 9 09:57:03.387808 sshd[1301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:57:03.391376 systemd-logind[1200]: New session 2 of user core.
Feb 9 09:57:03.392184 systemd[1]: Started session-2.scope.
Feb 9 09:57:03.453448 sshd[1301]: pam_unix(sshd:session): session closed for user core
Feb 9 09:57:03.456679 systemd[1]: sshd@1-10.0.0.81:22-10.0.0.1:36012.service: Deactivated successfully.
Feb 9 09:57:03.457521 systemd-logind[1200]: Session 2 logged out. Waiting for processes to exit.
Feb 9 09:57:03.458694 systemd[1]: Started sshd@2-10.0.0.81:22-10.0.0.1:36020.service.
Feb 9 09:57:03.459042 systemd[1]: session-2.scope: Deactivated successfully.
Feb 9 09:57:03.459638 systemd-logind[1200]: Removed session 2.
Feb 9 09:57:03.499347 sshd[1308]: Accepted publickey for core from 10.0.0.1 port 36020 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg
Feb 9 09:57:03.500513 sshd[1308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:57:03.503764 systemd-logind[1200]: New session 3 of user core.
Feb 9 09:57:03.504596 systemd[1]: Started session-3.scope.
Feb 9 09:57:03.554016 sshd[1308]: pam_unix(sshd:session): session closed for user core
Feb 9 09:57:03.556147 systemd[1]: Started sshd@3-10.0.0.81:22-10.0.0.1:36026.service.
Feb 9 09:57:03.557017 systemd[1]: sshd@2-10.0.0.81:22-10.0.0.1:36020.service: Deactivated successfully.
Feb 9 09:57:03.558194 systemd-logind[1200]: Session 3 logged out. Waiting for processes to exit.
Feb 9 09:57:03.558442 systemd[1]: session-3.scope: Deactivated successfully.
Feb 9 09:57:03.559302 systemd-logind[1200]: Removed session 3.
Feb 9 09:57:03.594257 sshd[1313]: Accepted publickey for core from 10.0.0.1 port 36026 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg
Feb 9 09:57:03.595482 sshd[1313]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:57:03.598892 systemd-logind[1200]: New session 4 of user core.
Feb 9 09:57:03.601005 systemd[1]: Started session-4.scope.
Feb 9 09:57:03.656000 sshd[1313]: pam_unix(sshd:session): session closed for user core
Feb 9 09:57:03.658438 systemd[1]: Started sshd@4-10.0.0.81:22-10.0.0.1:36032.service.
Feb 9 09:57:03.659281 systemd[1]: sshd@3-10.0.0.81:22-10.0.0.1:36026.service: Deactivated successfully.
Feb 9 09:57:03.660431 systemd-logind[1200]: Session 4 logged out. Waiting for processes to exit.
Feb 9 09:57:03.660578 systemd[1]: session-4.scope: Deactivated successfully.
Feb 9 09:57:03.661256 systemd-logind[1200]: Removed session 4.
Feb 9 09:57:03.698365 sshd[1320]: Accepted publickey for core from 10.0.0.1 port 36032 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg
Feb 9 09:57:03.699700 sshd[1320]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:57:03.703142 systemd-logind[1200]: New session 5 of user core.
Feb 9 09:57:03.704105 systemd[1]: Started session-5.scope.
Feb 9 09:57:03.763847 sudo[1326]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 9 09:57:03.764051 sudo[1326]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 09:57:04.273419 systemd[1]: Reloading.
Feb 9 09:57:04.317387 /usr/lib/systemd/system-generators/torcx-generator[1356]: time="2024-02-09T09:57:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 09:57:04.317718 /usr/lib/systemd/system-generators/torcx-generator[1356]: time="2024-02-09T09:57:04Z" level=info msg="torcx already run"
Feb 9 09:57:04.375626 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 09:57:04.375645 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 09:57:04.391752 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 09:57:04.451305 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 9 09:57:04.458522 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 9 09:57:04.458946 systemd[1]: Reached target network-online.target.
Feb 9 09:57:04.460415 systemd[1]: Started kubelet.service.
Feb 9 09:57:04.471059 systemd[1]: Starting coreos-metadata.service...
Feb 9 09:57:04.477674 systemd[1]: coreos-metadata.service: Deactivated successfully.
Feb 9 09:57:04.477885 systemd[1]: Finished coreos-metadata.service.
Feb 9 09:57:04.629641 kubelet[1401]: E0209 09:57:04.629567 1401 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 9 09:57:04.632088 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 09:57:04.632234 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 09:57:04.748594 systemd[1]: Stopped kubelet.service.
Feb 9 09:57:04.763042 systemd[1]: Reloading.
Feb 9 09:57:04.813438 /usr/lib/systemd/system-generators/torcx-generator[1474]: time="2024-02-09T09:57:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 09:57:04.813469 /usr/lib/systemd/system-generators/torcx-generator[1474]: time="2024-02-09T09:57:04Z" level=info msg="torcx already run"
Feb 9 09:57:04.875843 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 09:57:04.875861 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 09:57:04.891893 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 09:57:04.958590 systemd[1]: Started kubelet.service.
Feb 9 09:57:04.996645 kubelet[1518]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 9 09:57:04.996645 kubelet[1518]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 09:57:04.996957 kubelet[1518]: I0209 09:57:04.996747 1518 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 9 09:57:04.998161 kubelet[1518]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 9 09:57:04.998161 kubelet[1518]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 09:57:05.833726 kubelet[1518]: I0209 09:57:05.833682 1518 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 9 09:57:05.833726 kubelet[1518]: I0209 09:57:05.833714 1518 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 9 09:57:05.833986 kubelet[1518]: I0209 09:57:05.833960 1518 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 9 09:57:05.839885 kubelet[1518]: I0209 09:57:05.839866 1518 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 9 09:57:05.841922 kubelet[1518]: W0209 09:57:05.841902 1518 machine.go:65] Cannot read vendor id correctly, set empty.
Feb 9 09:57:05.842820 kubelet[1518]: I0209 09:57:05.842806 1518 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 9 09:57:05.843362 kubelet[1518]: I0209 09:57:05.843350 1518 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 9 09:57:05.843431 kubelet[1518]: I0209 09:57:05.843420 1518 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 9 09:57:05.843569 kubelet[1518]: I0209 09:57:05.843559 1518 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 9 09:57:05.843610 kubelet[1518]: I0209 09:57:05.843572 1518 container_manager_linux.go:308] "Creating device plugin manager"
Feb 9 09:57:05.843789 kubelet[1518]: I0209 09:57:05.843777 1518 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 09:57:05.850158 kubelet[1518]: I0209 09:57:05.850137 1518 kubelet.go:398] "Attempting to sync node with API server"
Feb 9 09:57:05.850267 kubelet[1518]: I0209 09:57:05.850255 1518 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 9 09:57:05.850467 kubelet[1518]: I0209 09:57:05.850456 1518 kubelet.go:297] "Adding apiserver pod source"
Feb 9 09:57:05.850552 kubelet[1518]: I0209 09:57:05.850541 1518 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 9 09:57:05.851614 kubelet[1518]: E0209 09:57:05.851592 1518 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:57:05.851870 kubelet[1518]: E0209 09:57:05.851857 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:57:05.851924 kubelet[1518]: I0209 09:57:05.851890 1518 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 9 09:57:05.853144 kubelet[1518]: W0209 09:57:05.853127 1518 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 9 09:57:05.853843 kubelet[1518]: I0209 09:57:05.853822 1518 server.go:1186] "Started kubelet"
Feb 9 09:57:05.854523 kubelet[1518]: I0209 09:57:05.854489 1518 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 9 09:57:05.855403 kubelet[1518]: E0209 09:57:05.855370 1518 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 9 09:57:05.855403 kubelet[1518]: E0209 09:57:05.855402 1518 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 9 09:57:05.856835 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 9 09:57:05.856887 kubelet[1518]: I0209 09:57:05.856861 1518 server.go:451] "Adding debug handlers to kubelet server"
Feb 9 09:57:05.857056 kubelet[1518]: I0209 09:57:05.857037 1518 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 9 09:57:05.857252 kubelet[1518]: I0209 09:57:05.857227 1518 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 9 09:57:05.858378 kubelet[1518]: I0209 09:57:05.858358 1518 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 9 09:57:05.858950 kubelet[1518]: E0209 09:57:05.858929 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:05.860054 kubelet[1518]: W0209 09:57:05.860013 1518 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 09:57:05.860106 kubelet[1518]: E0209 09:57:05.860061 1518 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 09:57:05.866318 kubelet[1518]: E0209 09:57:05.866275 1518 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.0.0.81" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 9 09:57:05.866390 kubelet[1518]: W0209 09:57:05.866358 1518 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.81" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 09:57:05.866390 kubelet[1518]: E0209 09:57:05.866377 1518 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.81" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 09:57:05.866448 kubelet[1518]: W0209 09:57:05.866427 1518 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 09:57:05.866448 kubelet[1518]: E0209 09:57:05.866438 1518 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 09:57:05.866556 kubelet[1518]: E0209 09:57:05.866460 1518 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.81.17b2294e7fb81b68", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.81", UID:"10.0.0.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 853799272, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 853799272, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 09:57:05.867656 kubelet[1518]: E0209 09:57:05.867588 1518 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.81.17b2294e7fd06642", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.81", UID:"10.0.0.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 855391298, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 855391298, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 09:57:05.893013 kubelet[1518]: I0209 09:57:05.892974 1518 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 9 09:57:05.893013 kubelet[1518]: I0209 09:57:05.892990 1518 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 9 09:57:05.893013 kubelet[1518]: I0209 09:57:05.893006 1518 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 09:57:05.893472 kubelet[1518]: E0209 09:57:05.893390 1518 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.81.17b2294e82039b52", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.81", UID:"10.0.0.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.81 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 892301650, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 892301650, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 09:57:05.894300 kubelet[1518]: E0209 09:57:05.894231 1518 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.81.17b2294e8203c471", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.81", UID:"10.0.0.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.81 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 892312177, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 892312177, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:57:05.894670 kubelet[1518]: I0209 09:57:05.894652 1518 policy_none.go:49] "None policy: Start"
Feb 9 09:57:05.895152 kubelet[1518]: E0209 09:57:05.895081 1518 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.81.17b2294e8203d059", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.81", UID:"10.0.0.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.81 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 892315225, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 892315225, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 09:57:05.895378 kubelet[1518]: I0209 09:57:05.895362 1518 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 9 09:57:05.895465 kubelet[1518]: I0209 09:57:05.895455 1518 state_mem.go:35] "Initializing new in-memory state store"
Feb 9 09:57:05.899630 kubelet[1518]: I0209 09:57:05.899608 1518 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 9 09:57:05.899892 kubelet[1518]: I0209 09:57:05.899876 1518 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 9 09:57:05.901570 kubelet[1518]: E0209 09:57:05.901487 1518 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.81\" not found"
Feb 9 09:57:05.901919 kubelet[1518]: E0209 09:57:05.901847 1518 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.81.17b2294e8289b0e2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.81", UID:"10.0.0.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 901088994, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 901088994, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 09:57:05.960237 kubelet[1518]: I0209 09:57:05.960198 1518 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.81"
Feb 9 09:57:05.961735 kubelet[1518]: E0209 09:57:05.961656 1518 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.81.17b2294e82039b52", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.81", UID:"10.0.0.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.81 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 892301650, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 960153220, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.81.17b2294e82039b52" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 09:57:05.962134 kubelet[1518]: E0209 09:57:05.962103 1518 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.81"
Feb 9 09:57:05.962670 kubelet[1518]: E0209 09:57:05.962600 1518 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.81.17b2294e8203c471", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.81", UID:"10.0.0.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.81 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 892312177, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 960166280, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.81.17b2294e8203c471" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 09:57:05.963468 kubelet[1518]: E0209 09:57:05.963406 1518 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.81.17b2294e8203d059", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.81", UID:"10.0.0.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.81 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 892315225, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 960169288, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.81.17b2294e8203d059" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 09:57:05.988956 kubelet[1518]: I0209 09:57:05.988928 1518 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 9 09:57:06.009534 kubelet[1518]: I0209 09:57:06.009504 1518 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 9 09:57:06.009534 kubelet[1518]: I0209 09:57:06.009528 1518 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 9 09:57:06.009866 kubelet[1518]: I0209 09:57:06.009547 1518 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 9 09:57:06.009866 kubelet[1518]: E0209 09:57:06.009592 1518 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 9 09:57:06.011081 kubelet[1518]: W0209 09:57:06.011058 1518 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 09:57:06.011169 kubelet[1518]: E0209 09:57:06.011089 1518 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 09:57:06.067985 kubelet[1518]: E0209 09:57:06.067946 1518 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.0.0.81" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 9 09:57:06.163078 kubelet[1518]: I0209 09:57:06.162992 1518 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.81"
Feb 9 09:57:06.164350 kubelet[1518]: E0209 09:57:06.164248 1518 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.81.17b2294e82039b52", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.81", UID:"10.0.0.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.81 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 892301650, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 57, 6, 162951813, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.81.17b2294e82039b52" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 09:57:06.164795 kubelet[1518]: E0209 09:57:06.164750 1518 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.81" Feb 9 09:57:06.165277 kubelet[1518]: E0209 09:57:06.165204 1518 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.81.17b2294e8203c471", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.81", UID:"10.0.0.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.81 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 892312177, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 57, 6, 162965683, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.81.17b2294e8203c471" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:57:06.256465 kubelet[1518]: E0209 09:57:06.256379 1518 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.81.17b2294e8203d059", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.81", UID:"10.0.0.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.81 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 892315225, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 57, 6, 162968616, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.81.17b2294e8203d059" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:57:06.469532 kubelet[1518]: E0209 09:57:06.469441 1518 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.0.0.81" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 09:57:06.566310 kubelet[1518]: I0209 09:57:06.566259 1518 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.81" Feb 9 09:57:06.567858 kubelet[1518]: E0209 09:57:06.567826 1518 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.81" Feb 9 09:57:06.567924 kubelet[1518]: E0209 09:57:06.567822 1518 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.81.17b2294e82039b52", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.81", UID:"10.0.0.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.81 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 892301650, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 57, 6, 566224128, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), 
ReportingController:"", ReportingInstance:""}': 'events "10.0.0.81.17b2294e82039b52" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:57:06.656334 kubelet[1518]: E0209 09:57:06.656244 1518 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.81.17b2294e8203c471", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.81", UID:"10.0.0.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.81 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 892312177, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 57, 6, 566229438, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.81.17b2294e8203c471" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:57:06.853041 kubelet[1518]: E0209 09:57:06.853001 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:06.856540 kubelet[1518]: E0209 09:57:06.856455 1518 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.81.17b2294e8203d059", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.81", UID:"10.0.0.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.81 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 892315225, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 57, 6, 566232767, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.81.17b2294e8203d059" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:57:06.948879 kubelet[1518]: W0209 09:57:06.948841 1518 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 09:57:06.948879 kubelet[1518]: E0209 09:57:06.948881 1518 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 09:57:06.956884 kubelet[1518]: W0209 09:57:06.956852 1518 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.81" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:57:06.956884 kubelet[1518]: E0209 09:57:06.956881 1518 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.81" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:57:06.959705 kubelet[1518]: W0209 09:57:06.959678 1518 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 09:57:06.959705 kubelet[1518]: E0209 09:57:06.959704 1518 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 09:57:07.270939 kubelet[1518]: E0209 09:57:07.270842 1518 controller.go:146] failed to 
ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.0.0.81" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 09:57:07.368933 kubelet[1518]: I0209 09:57:07.368899 1518 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.81" Feb 9 09:57:07.370243 kubelet[1518]: E0209 09:57:07.370211 1518 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.81" Feb 9 09:57:07.370243 kubelet[1518]: E0209 09:57:07.370183 1518 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.81.17b2294e82039b52", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.81", UID:"10.0.0.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.81 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 892301650, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 57, 7, 368866540, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.81.17b2294e82039b52" is forbidden: User 
"system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:57:07.371090 kubelet[1518]: E0209 09:57:07.371020 1518 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.81.17b2294e8203c471", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.81", UID:"10.0.0.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.81 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 892312177, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 57, 7, 368871024, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.81.17b2294e8203c471" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:57:07.456482 kubelet[1518]: E0209 09:57:07.456311 1518 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.81.17b2294e8203d059", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.81", UID:"10.0.0.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.81 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 892315225, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 57, 7, 368873920, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.81.17b2294e8203d059" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:57:07.465225 kubelet[1518]: W0209 09:57:07.465193 1518 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 09:57:07.465225 kubelet[1518]: E0209 09:57:07.465223 1518 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 09:57:07.853378 kubelet[1518]: E0209 09:57:07.853312 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:08.571846 kubelet[1518]: W0209 09:57:08.571803 1518 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.81" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:57:08.571846 kubelet[1518]: E0209 09:57:08.571840 1518 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.81" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:57:08.645146 kubelet[1518]: W0209 09:57:08.645109 1518 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 09:57:08.645199 kubelet[1518]: E0209 09:57:08.645145 1518 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope 
Feb 9 09:57:08.774856 kubelet[1518]: W0209 09:57:08.774816 1518 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 09:57:08.774856 kubelet[1518]: E0209 09:57:08.774849 1518 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 09:57:08.854170 kubelet[1518]: E0209 09:57:08.854102 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:08.872186 kubelet[1518]: E0209 09:57:08.872152 1518 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.0.0.81" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 09:57:08.971156 kubelet[1518]: I0209 09:57:08.971125 1518 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.81" Feb 9 09:57:08.972461 kubelet[1518]: E0209 09:57:08.972376 1518 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.81.17b2294e82039b52", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", 
Name:"10.0.0.81", UID:"10.0.0.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.81 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 892301650, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 57, 8, 971091262, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.81.17b2294e82039b52" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:57:08.972682 kubelet[1518]: E0209 09:57:08.972655 1518 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.81" Feb 9 09:57:08.973242 kubelet[1518]: E0209 09:57:08.973183 1518 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.81.17b2294e8203c471", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.81", UID:"10.0.0.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.81 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.81"}, 
FirstTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 892312177, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 57, 8, 971096187, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.81.17b2294e8203c471" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:57:08.974000 kubelet[1518]: E0209 09:57:08.973932 1518 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.81.17b2294e8203d059", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.81", UID:"10.0.0.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.81 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 892315225, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 57, 8, 971099364, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.81.17b2294e8203d059" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' 
(will not retry!) Feb 9 09:57:09.854891 kubelet[1518]: E0209 09:57:09.854850 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:09.978903 kubelet[1518]: W0209 09:57:09.978870 1518 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 09:57:09.978903 kubelet[1518]: E0209 09:57:09.978904 1518 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 09:57:10.855908 kubelet[1518]: E0209 09:57:10.855836 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:11.856569 kubelet[1518]: E0209 09:57:11.856521 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:12.073453 kubelet[1518]: E0209 09:57:12.073417 1518 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.0.0.81" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 09:57:12.173840 kubelet[1518]: I0209 09:57:12.173742 1518 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.81" Feb 9 09:57:12.175477 kubelet[1518]: E0209 09:57:12.175449 1518 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.81" Feb 9 09:57:12.175477 kubelet[1518]: E0209 09:57:12.175419 1518 event.go:267] Server rejected event 
'&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.81.17b2294e82039b52", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.81", UID:"10.0.0.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.81 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 892301650, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 57, 12, 173699992, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.81.17b2294e82039b52" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:57:12.176365 kubelet[1518]: E0209 09:57:12.176281 1518 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.81.17b2294e8203c471", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.81", UID:"10.0.0.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.81 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 892312177, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 57, 12, 173712699, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.81.17b2294e8203c471" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:57:12.177201 kubelet[1518]: E0209 09:57:12.177117 1518 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.81.17b2294e8203d059", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.81", UID:"10.0.0.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.81 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 57, 5, 892315225, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 57, 12, 173716125, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.81.17b2294e8203d059" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:57:12.857342 kubelet[1518]: E0209 09:57:12.857256 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:57:13.066481 kubelet[1518]: W0209 09:57:13.066447 1518 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 09:57:13.066481 kubelet[1518]: E0209 09:57:13.066478 1518 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 09:57:13.396914 kubelet[1518]: W0209 09:57:13.396875 1518 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 09:57:13.396914 kubelet[1518]: E0209 09:57:13.396913 1518 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 09:57:13.858380 kubelet[1518]: E0209 09:57:13.858339 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:57:14.430929 kubelet[1518]: W0209 09:57:14.430891 1518 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.81" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 09:57:14.430929 kubelet[1518]: E0209 09:57:14.430928 1518 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.81" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 09:57:14.858458 kubelet[1518]: E0209 09:57:14.858429 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:57:15.554385 kubelet[1518]: W0209 09:57:15.554336 1518 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 09:57:15.554385 kubelet[1518]: E0209 09:57:15.554375 1518 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 09:57:15.836818 kubelet[1518]: I0209 09:57:15.836717 1518 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 9 09:57:15.859193 kubelet[1518]: E0209 09:57:15.859162 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:57:15.902402 kubelet[1518]: E0209 09:57:15.902375 1518 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.81\" not found"
Feb 9 09:57:16.224157 kubelet[1518]: E0209 09:57:16.224062 1518 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.81" not found
Feb 9 09:57:16.860719 kubelet[1518]: E0209 09:57:16.860656 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:57:17.270262 kubelet[1518]: E0209 09:57:17.270154 1518 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.81" not found
Feb 9 09:57:17.861479 kubelet[1518]: E0209 09:57:17.861440 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:57:18.479010 kubelet[1518]: E0209 09:57:18.478980 1518 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.81\" not found" node="10.0.0.81"
Feb 9 09:57:18.577181 kubelet[1518]: I0209 09:57:18.576808 1518 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.81"
Feb 9 09:57:18.673845 kubelet[1518]: I0209 09:57:18.673796 1518 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.81"
Feb 9 09:57:18.678008 kubelet[1518]: I0209 09:57:18.677976 1518 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 9 09:57:18.678451 env[1216]: time="2024-02-09T09:57:18.678349470Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 9 09:57:18.678904 kubelet[1518]: I0209 09:57:18.678883 1518 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb 9 09:57:18.692203 kubelet[1518]: E0209 09:57:18.692125 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:18.792444 kubelet[1518]: E0209 09:57:18.792417 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:18.862129 kubelet[1518]: E0209 09:57:18.862098 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:57:18.893682 kubelet[1518]: E0209 09:57:18.893643 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:18.972525 sudo[1326]: pam_unix(sudo:session): session closed for user root
Feb 9 09:57:18.974503 sshd[1320]: pam_unix(sshd:session): session closed for user core
Feb 9 09:57:18.976816 systemd[1]: sshd@4-10.0.0.81:22-10.0.0.1:36032.service: Deactivated successfully.
Feb 9 09:57:18.977871 systemd-logind[1200]: Session 5 logged out. Waiting for processes to exit.
Feb 9 09:57:18.977899 systemd[1]: session-5.scope: Deactivated successfully.
Feb 9 09:57:18.978911 systemd-logind[1200]: Removed session 5.
Feb 9 09:57:18.994363 kubelet[1518]: E0209 09:57:18.994313 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:19.095068 kubelet[1518]: E0209 09:57:19.094973 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:19.195474 kubelet[1518]: E0209 09:57:19.195426 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:19.295991 kubelet[1518]: E0209 09:57:19.295956 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:19.396654 kubelet[1518]: E0209 09:57:19.396551 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:19.496951 kubelet[1518]: E0209 09:57:19.496914 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:19.597328 kubelet[1518]: E0209 09:57:19.597306 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:19.697779 kubelet[1518]: E0209 09:57:19.697680 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:19.798166 kubelet[1518]: E0209 09:57:19.798127 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:19.862452 kubelet[1518]: E0209 09:57:19.862414 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:57:19.898639 kubelet[1518]: E0209 09:57:19.898586 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:19.999206 kubelet[1518]: E0209 09:57:19.999089 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:20.099612 kubelet[1518]: E0209 09:57:20.099552 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:20.200030 kubelet[1518]: E0209 09:57:20.199961 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:20.300380 kubelet[1518]: E0209 09:57:20.300346 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:20.400911 kubelet[1518]: E0209 09:57:20.400876 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:20.501302 kubelet[1518]: E0209 09:57:20.501264 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:20.601690 kubelet[1518]: E0209 09:57:20.601615 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:20.702061 kubelet[1518]: E0209 09:57:20.702030 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:20.802398 kubelet[1518]: E0209 09:57:20.802375 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:20.863241 kubelet[1518]: E0209 09:57:20.863117 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:57:20.902553 kubelet[1518]: E0209 09:57:20.902528 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:21.003408 kubelet[1518]: E0209 09:57:21.003337 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:21.103851 kubelet[1518]: E0209 09:57:21.103795 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:21.204335 kubelet[1518]: E0209 09:57:21.204209 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:21.304666 kubelet[1518]: E0209 09:57:21.304601 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:21.405120 kubelet[1518]: E0209 09:57:21.405081 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:21.505579 kubelet[1518]: E0209 09:57:21.505467 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:21.605940 kubelet[1518]: E0209 09:57:21.605894 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:21.706359 kubelet[1518]: E0209 09:57:21.706297 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:21.806770 kubelet[1518]: E0209 09:57:21.806725 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:21.863520 kubelet[1518]: E0209 09:57:21.863472 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:57:21.907588 kubelet[1518]: E0209 09:57:21.907549 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:22.008219 kubelet[1518]: E0209 09:57:22.008177 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:22.108726 kubelet[1518]: E0209 09:57:22.108598 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:22.209073 kubelet[1518]: E0209 09:57:22.209038 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:22.309582 kubelet[1518]: E0209 09:57:22.309530 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:22.410214 kubelet[1518]: E0209 09:57:22.410097 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:22.510551 kubelet[1518]: E0209 09:57:22.510500 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:22.611015 kubelet[1518]: E0209 09:57:22.610969 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:22.711636 kubelet[1518]: E0209 09:57:22.711486 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:22.811911 kubelet[1518]: E0209 09:57:22.811865 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:22.863587 kubelet[1518]: E0209 09:57:22.863535 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:57:22.912841 kubelet[1518]: E0209 09:57:22.912803 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:23.013595 kubelet[1518]: E0209 09:57:23.013482 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:23.113932 kubelet[1518]: E0209 09:57:23.113875 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:23.214495 kubelet[1518]: E0209 09:57:23.214440 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:23.314919 kubelet[1518]: E0209 09:57:23.314876 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:23.415505 kubelet[1518]: E0209 09:57:23.415456 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:23.515967 kubelet[1518]: E0209 09:57:23.515931 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:23.616433 kubelet[1518]: E0209 09:57:23.616315 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:23.716873 kubelet[1518]: E0209 09:57:23.716823 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:23.817297 kubelet[1518]: E0209 09:57:23.817255 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:23.863889 kubelet[1518]: E0209 09:57:23.863852 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:57:23.918235 kubelet[1518]: E0209 09:57:23.918132 1518 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.81\" not found"
Feb 9 09:57:24.862804 kubelet[1518]: I0209 09:57:24.862767 1518 apiserver.go:52] "Watching apiserver"
Feb 9 09:57:24.864498 kubelet[1518]: E0209 09:57:24.864473 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:57:24.865775 kubelet[1518]: I0209 09:57:24.865746 1518 topology_manager.go:210] "Topology Admit Handler"
Feb 9 09:57:24.865824 kubelet[1518]: I0209 09:57:24.865814 1518 topology_manager.go:210]
"Topology Admit Handler"
Feb 9 09:57:24.960025 kubelet[1518]: I0209 09:57:24.959976 1518 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 9 09:57:25.045411 kubelet[1518]: I0209 09:57:25.045363 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-cilium-run\") pod \"cilium-d8f26\" (UID: \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\") " pod="kube-system/cilium-d8f26"
Feb 9 09:57:25.045502 kubelet[1518]: I0209 09:57:25.045465 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-bpf-maps\") pod \"cilium-d8f26\" (UID: \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\") " pod="kube-system/cilium-d8f26"
Feb 9 09:57:25.045502 kubelet[1518]: I0209 09:57:25.045490 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-cilium-cgroup\") pod \"cilium-d8f26\" (UID: \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\") " pod="kube-system/cilium-d8f26"
Feb 9 09:57:25.045578 kubelet[1518]: I0209 09:57:25.045544 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-etc-cni-netd\") pod \"cilium-d8f26\" (UID: \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\") " pod="kube-system/cilium-d8f26"
Feb 9 09:57:25.045610 kubelet[1518]: I0209 09:57:25.045585 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-xtables-lock\") pod \"cilium-d8f26\" (UID: \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\") " pod="kube-system/cilium-d8f26"
Feb 9 09:57:25.045610 kubelet[1518]: I0209 09:57:25.045608 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-host-proc-sys-net\") pod \"cilium-d8f26\" (UID: \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\") " pod="kube-system/cilium-d8f26"
Feb 9 09:57:25.045675 kubelet[1518]: I0209 09:57:25.045628 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7dcbe0f4-cf08-476d-9e8d-419efd47aabe-xtables-lock\") pod \"kube-proxy-kvndh\" (UID: \"7dcbe0f4-cf08-476d-9e8d-419efd47aabe\") " pod="kube-system/kube-proxy-kvndh"
Feb 9 09:57:25.045675 kubelet[1518]: I0209 09:57:25.045654 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7dcbe0f4-cf08-476d-9e8d-419efd47aabe-lib-modules\") pod \"kube-proxy-kvndh\" (UID: \"7dcbe0f4-cf08-476d-9e8d-419efd47aabe\") " pod="kube-system/kube-proxy-kvndh"
Feb 9 09:57:25.045719 kubelet[1518]: I0209 09:57:25.045687 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-hostproc\") pod \"cilium-d8f26\" (UID: \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\") " pod="kube-system/cilium-d8f26"
Feb 9 09:57:25.045719 kubelet[1518]: I0209 09:57:25.045717 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-cni-path\") pod \"cilium-d8f26\" (UID: \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\") " pod="kube-system/cilium-d8f26"
Feb 9 09:57:25.045764 kubelet[1518]: I0209 09:57:25.045754 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-lib-modules\") pod \"cilium-d8f26\" (UID: \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\") " pod="kube-system/cilium-d8f26"
Feb 9 09:57:25.045849 kubelet[1518]: I0209 09:57:25.045783 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-clustermesh-secrets\") pod \"cilium-d8f26\" (UID: \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\") " pod="kube-system/cilium-d8f26"
Feb 9 09:57:25.045882 kubelet[1518]: I0209 09:57:25.045851 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-cilium-config-path\") pod \"cilium-d8f26\" (UID: \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\") " pod="kube-system/cilium-d8f26"
Feb 9 09:57:25.045907 kubelet[1518]: I0209 09:57:25.045887 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7dcbe0f4-cf08-476d-9e8d-419efd47aabe-kube-proxy\") pod \"kube-proxy-kvndh\" (UID: \"7dcbe0f4-cf08-476d-9e8d-419efd47aabe\") " pod="kube-system/kube-proxy-kvndh"
Feb 9 09:57:25.045930 kubelet[1518]: I0209 09:57:25.045916 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhp98\" (UniqueName: \"kubernetes.io/projected/7dcbe0f4-cf08-476d-9e8d-419efd47aabe-kube-api-access-qhp98\") pod \"kube-proxy-kvndh\" (UID: \"7dcbe0f4-cf08-476d-9e8d-419efd47aabe\") " pod="kube-system/kube-proxy-kvndh"
Feb 9 09:57:25.045957 kubelet[1518]: I0209 09:57:25.045945 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-host-proc-sys-kernel\") pod \"cilium-d8f26\" (UID: \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\") " pod="kube-system/cilium-d8f26"
Feb 9 09:57:25.045988 kubelet[1518]: I0209 09:57:25.045976 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-hubble-tls\") pod \"cilium-d8f26\" (UID: \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\") " pod="kube-system/cilium-d8f26"
Feb 9 09:57:25.046014 kubelet[1518]: I0209 09:57:25.046005 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thqcx\" (UniqueName: \"kubernetes.io/projected/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-kube-api-access-thqcx\") pod \"cilium-d8f26\" (UID: \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\") " pod="kube-system/cilium-d8f26"
Feb 9 09:57:25.046043 kubelet[1518]: I0209 09:57:25.046028 1518 reconciler.go:41] "Reconciler: start to sync state"
Feb 9 09:57:25.171180 kubelet[1518]: E0209 09:57:25.171095 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:57:25.171928 env[1216]: time="2024-02-09T09:57:25.171663240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d8f26,Uid:e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033,Namespace:kube-system,Attempt:0,}"
Feb 9 09:57:25.468395 kubelet[1518]: E0209 09:57:25.468283 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:57:25.468930 env[1216]: time="2024-02-09T09:57:25.468889609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kvndh,Uid:7dcbe0f4-cf08-476d-9e8d-419efd47aabe,Namespace:kube-system,Attempt:0,}"
Feb 9 09:57:25.702153 env[1216]: time="2024-02-09T09:57:25.702106567Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:57:25.703770 env[1216]: time="2024-02-09T09:57:25.703736290Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:57:25.704619 env[1216]: time="2024-02-09T09:57:25.704586306Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:57:25.706286 env[1216]: time="2024-02-09T09:57:25.706259678Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:57:25.708649 env[1216]: time="2024-02-09T09:57:25.708620225Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:57:25.710948 env[1216]: time="2024-02-09T09:57:25.710915940Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:57:25.711750 env[1216]: time="2024-02-09T09:57:25.711724546Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:57:25.713365 env[1216]: time="2024-02-09T09:57:25.713342479Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:57:25.738656 env[1216]: time="2024-02-09T09:57:25.738427384Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 09:57:25.738656 env[1216]: time="2024-02-09T09:57:25.738466595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 09:57:25.738656 env[1216]: time="2024-02-09T09:57:25.738476668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 09:57:25.738763 env[1216]: time="2024-02-09T09:57:25.738723047Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5ef5751dc3d9416fde29d4c3535c2ef3dc6edd74215a5065dfba34969aa0d2c6 pid=1622 runtime=io.containerd.runc.v2
Feb 9 09:57:25.739876 env[1216]: time="2024-02-09T09:57:25.739816964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 09:57:25.739876 env[1216]: time="2024-02-09T09:57:25.739848101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 09:57:25.739876 env[1216]: time="2024-02-09T09:57:25.739857814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 09:57:25.740495 env[1216]: time="2024-02-09T09:57:25.740451978Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a9e3ac4225066803c10954d45dc6d695d46586972ea72fde49ca550fbbb66b91 pid=1623 runtime=io.containerd.runc.v2
Feb 9 09:57:25.808953 env[1216]: time="2024-02-09T09:57:25.808902529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d8f26,Uid:e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ef5751dc3d9416fde29d4c3535c2ef3dc6edd74215a5065dfba34969aa0d2c6\""
Feb 9 09:57:25.809059 env[1216]: time="2024-02-09T09:57:25.808986387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kvndh,Uid:7dcbe0f4-cf08-476d-9e8d-419efd47aabe,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9e3ac4225066803c10954d45dc6d695d46586972ea72fde49ca550fbbb66b91\""
Feb 9 09:57:25.810148 kubelet[1518]: E0209 09:57:25.809674 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:57:25.810148 kubelet[1518]: E0209 09:57:25.809699 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:57:25.810490 env[1216]: time="2024-02-09T09:57:25.810454190Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\""
Feb 9 09:57:25.850647 kubelet[1518]: E0209 09:57:25.850604 1518 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:57:25.864898 kubelet[1518]: E0209 09:57:25.864866 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:57:26.153789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3553949089.mount: Deactivated successfully.
Feb 9 09:57:26.774944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount624759151.mount: Deactivated successfully.
Feb 9 09:57:26.864977 kubelet[1518]: E0209 09:57:26.864925 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:57:27.108532 env[1216]: time="2024-02-09T09:57:27.108486706Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:57:27.109956 env[1216]: time="2024-02-09T09:57:27.109905829Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:57:27.111233 env[1216]: time="2024-02-09T09:57:27.111205938Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:57:27.112491 env[1216]: time="2024-02-09T09:57:27.112466110Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:57:27.113568 env[1216]: time="2024-02-09T09:57:27.113533910Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\""
Feb 9 09:57:27.114334 env[1216]: time="2024-02-09T09:57:27.114283529Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 9 09:57:27.115741 env[1216]: time="2024-02-09T09:57:27.115711446Z" level=info msg="CreateContainer within sandbox \"a9e3ac4225066803c10954d45dc6d695d46586972ea72fde49ca550fbbb66b91\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 9 09:57:27.124348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1207126128.mount: Deactivated successfully.
Feb 9 09:57:27.128458 env[1216]: time="2024-02-09T09:57:27.128421504Z" level=info msg="CreateContainer within sandbox \"a9e3ac4225066803c10954d45dc6d695d46586972ea72fde49ca550fbbb66b91\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"efc0ad4a2bcbb39e69e407955cc8a68d4998676e4125af89e8a4139a5635e4c3\""
Feb 9 09:57:27.129096 env[1216]: time="2024-02-09T09:57:27.129053389Z" level=info msg="StartContainer for \"efc0ad4a2bcbb39e69e407955cc8a68d4998676e4125af89e8a4139a5635e4c3\""
Feb 9 09:57:27.184089 env[1216]: time="2024-02-09T09:57:27.184030255Z" level=info msg="StartContainer for \"efc0ad4a2bcbb39e69e407955cc8a68d4998676e4125af89e8a4139a5635e4c3\" returns successfully"
Feb 9 09:57:27.865496 kubelet[1518]: E0209 09:57:27.865448 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:57:28.043438 kubelet[1518]: E0209 09:57:28.043409 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:57:28.054825 kubelet[1518]: I0209 09:57:28.054790 1518 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-kvndh" podStartSLOduration=-9.223372026800047e+09 pod.CreationTimestamp="2024-02-09 09:57:18 +0000 UTC" firstStartedPulling="2024-02-09 09:57:25.810044571 +0000 UTC m=+20.848246803" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:57:28.05464321 +0000 UTC m=+23.092845442" watchObservedRunningTime="2024-02-09 09:57:28.054729807 +0000 UTC m=+23.092932039"
Feb 9 09:57:28.865773 kubelet[1518]: E0209 09:57:28.865722 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:57:29.045035 kubelet[1518]: E0209 09:57:29.044997 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:57:29.865863 kubelet[1518]: E0209 09:57:29.865820 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:57:30.353523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3898722663.mount: Deactivated successfully.
Feb 9 09:57:30.866201 kubelet[1518]: E0209 09:57:30.866160 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:57:31.866827 kubelet[1518]: E0209 09:57:31.866774 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:57:32.622743 env[1216]: time="2024-02-09T09:57:32.622694185Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:57:32.624278 env[1216]: time="2024-02-09T09:57:32.624237408Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:57:32.625993 env[1216]: time="2024-02-09T09:57:32.625966624Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:57:32.626853 env[1216]:
time="2024-02-09T09:57:32.626821832Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 9 09:57:32.629014 env[1216]: time="2024-02-09T09:57:32.628971872Z" level=info msg="CreateContainer within sandbox \"5ef5751dc3d9416fde29d4c3535c2ef3dc6edd74215a5065dfba34969aa0d2c6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 09:57:32.636097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2752049410.mount: Deactivated successfully. Feb 9 09:57:32.640200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1139111274.mount: Deactivated successfully. Feb 9 09:57:32.643779 env[1216]: time="2024-02-09T09:57:32.643742044Z" level=info msg="CreateContainer within sandbox \"5ef5751dc3d9416fde29d4c3535c2ef3dc6edd74215a5065dfba34969aa0d2c6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0e4b6c07018b1c70ba9c6918af5a94f3c2dad3b54c770d245578973dc207c74a\"" Feb 9 09:57:32.644260 env[1216]: time="2024-02-09T09:57:32.644239425Z" level=info msg="StartContainer for \"0e4b6c07018b1c70ba9c6918af5a94f3c2dad3b54c770d245578973dc207c74a\"" Feb 9 09:57:32.695364 env[1216]: time="2024-02-09T09:57:32.695305809Z" level=info msg="StartContainer for \"0e4b6c07018b1c70ba9c6918af5a94f3c2dad3b54c770d245578973dc207c74a\" returns successfully" Feb 9 09:57:32.850369 env[1216]: time="2024-02-09T09:57:32.850313853Z" level=info msg="shim disconnected" id=0e4b6c07018b1c70ba9c6918af5a94f3c2dad3b54c770d245578973dc207c74a Feb 9 09:57:32.850559 env[1216]: time="2024-02-09T09:57:32.850373331Z" level=warning msg="cleaning up after shim disconnected" id=0e4b6c07018b1c70ba9c6918af5a94f3c2dad3b54c770d245578973dc207c74a namespace=k8s.io Feb 9 09:57:32.850559 env[1216]: time="2024-02-09T09:57:32.850385570Z" level=info msg="cleaning up dead shim" Feb 9 09:57:32.857297 
env[1216]: time="2024-02-09T09:57:32.857244796Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:57:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1907 runtime=io.containerd.runc.v2\n" Feb 9 09:57:32.867484 kubelet[1518]: E0209 09:57:32.867436 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:33.051044 kubelet[1518]: E0209 09:57:33.051018 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:57:33.053238 env[1216]: time="2024-02-09T09:57:33.053204262Z" level=info msg="CreateContainer within sandbox \"5ef5751dc3d9416fde29d4c3535c2ef3dc6edd74215a5065dfba34969aa0d2c6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 09:57:33.065284 env[1216]: time="2024-02-09T09:57:33.065238280Z" level=info msg="CreateContainer within sandbox \"5ef5751dc3d9416fde29d4c3535c2ef3dc6edd74215a5065dfba34969aa0d2c6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0f346bd014460bb96fdcdf20925fb6b87983f7200920789e081e18a139a3b3c3\"" Feb 9 09:57:33.065757 env[1216]: time="2024-02-09T09:57:33.065728343Z" level=info msg="StartContainer for \"0f346bd014460bb96fdcdf20925fb6b87983f7200920789e081e18a139a3b3c3\"" Feb 9 09:57:33.107386 env[1216]: time="2024-02-09T09:57:33.107339283Z" level=info msg="StartContainer for \"0f346bd014460bb96fdcdf20925fb6b87983f7200920789e081e18a139a3b3c3\" returns successfully" Feb 9 09:57:33.116790 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 09:57:33.117043 systemd[1]: Stopped systemd-sysctl.service. Feb 9 09:57:33.117209 systemd[1]: Stopping systemd-sysctl.service... Feb 9 09:57:33.119262 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:57:33.126057 systemd[1]: Finished systemd-sysctl.service. 
Feb 9 09:57:33.137218 env[1216]: time="2024-02-09T09:57:33.137178277Z" level=info msg="shim disconnected" id=0f346bd014460bb96fdcdf20925fb6b87983f7200920789e081e18a139a3b3c3 Feb 9 09:57:33.137400 env[1216]: time="2024-02-09T09:57:33.137369270Z" level=warning msg="cleaning up after shim disconnected" id=0f346bd014460bb96fdcdf20925fb6b87983f7200920789e081e18a139a3b3c3 namespace=k8s.io Feb 9 09:57:33.137400 env[1216]: time="2024-02-09T09:57:33.137396389Z" level=info msg="cleaning up dead shim" Feb 9 09:57:33.144923 env[1216]: time="2024-02-09T09:57:33.144887967Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:57:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1970 runtime=io.containerd.runc.v2\n" Feb 9 09:57:33.634594 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e4b6c07018b1c70ba9c6918af5a94f3c2dad3b54c770d245578973dc207c74a-rootfs.mount: Deactivated successfully. Feb 9 09:57:33.867742 kubelet[1518]: E0209 09:57:33.867706 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:34.054457 kubelet[1518]: E0209 09:57:34.054432 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:57:34.056215 env[1216]: time="2024-02-09T09:57:34.056175665Z" level=info msg="CreateContainer within sandbox \"5ef5751dc3d9416fde29d4c3535c2ef3dc6edd74215a5065dfba34969aa0d2c6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 09:57:34.074012 env[1216]: time="2024-02-09T09:57:34.073962436Z" level=info msg="CreateContainer within sandbox \"5ef5751dc3d9416fde29d4c3535c2ef3dc6edd74215a5065dfba34969aa0d2c6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"56f3ffcea3f49f047ebf745df0a5052a7b232d10f185e2516c7229b2c153c354\"" Feb 9 09:57:34.074444 env[1216]: time="2024-02-09T09:57:34.074411741Z" 
level=info msg="StartContainer for \"56f3ffcea3f49f047ebf745df0a5052a7b232d10f185e2516c7229b2c153c354\"" Feb 9 09:57:34.126836 env[1216]: time="2024-02-09T09:57:34.126781646Z" level=info msg="StartContainer for \"56f3ffcea3f49f047ebf745df0a5052a7b232d10f185e2516c7229b2c153c354\" returns successfully" Feb 9 09:57:34.153643 env[1216]: time="2024-02-09T09:57:34.153587798Z" level=info msg="shim disconnected" id=56f3ffcea3f49f047ebf745df0a5052a7b232d10f185e2516c7229b2c153c354 Feb 9 09:57:34.153643 env[1216]: time="2024-02-09T09:57:34.153644036Z" level=warning msg="cleaning up after shim disconnected" id=56f3ffcea3f49f047ebf745df0a5052a7b232d10f185e2516c7229b2c153c354 namespace=k8s.io Feb 9 09:57:34.153852 env[1216]: time="2024-02-09T09:57:34.153653955Z" level=info msg="cleaning up dead shim" Feb 9 09:57:34.160641 env[1216]: time="2024-02-09T09:57:34.160600165Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:57:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2029 runtime=io.containerd.runc.v2\n" Feb 9 09:57:34.634231 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56f3ffcea3f49f047ebf745df0a5052a7b232d10f185e2516c7229b2c153c354-rootfs.mount: Deactivated successfully. 
Feb 9 09:57:34.868685 kubelet[1518]: E0209 09:57:34.868653 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:35.057751 kubelet[1518]: E0209 09:57:35.057715 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:57:35.062276 env[1216]: time="2024-02-09T09:57:35.062210477Z" level=info msg="CreateContainer within sandbox \"5ef5751dc3d9416fde29d4c3535c2ef3dc6edd74215a5065dfba34969aa0d2c6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 09:57:35.079585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1710054724.mount: Deactivated successfully. Feb 9 09:57:35.090516 env[1216]: time="2024-02-09T09:57:35.090470472Z" level=info msg="CreateContainer within sandbox \"5ef5751dc3d9416fde29d4c3535c2ef3dc6edd74215a5065dfba34969aa0d2c6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e5a599f2a2f0b1102ee7dc34aa6220406105bc681a632f23744afd892f75b06e\"" Feb 9 09:57:35.091187 env[1216]: time="2024-02-09T09:57:35.091104292Z" level=info msg="StartContainer for \"e5a599f2a2f0b1102ee7dc34aa6220406105bc681a632f23744afd892f75b06e\"" Feb 9 09:57:35.159897 env[1216]: time="2024-02-09T09:57:35.159849459Z" level=info msg="StartContainer for \"e5a599f2a2f0b1102ee7dc34aa6220406105bc681a632f23744afd892f75b06e\" returns successfully" Feb 9 09:57:35.178040 env[1216]: time="2024-02-09T09:57:35.177988571Z" level=info msg="shim disconnected" id=e5a599f2a2f0b1102ee7dc34aa6220406105bc681a632f23744afd892f75b06e Feb 9 09:57:35.178040 env[1216]: time="2024-02-09T09:57:35.178028130Z" level=warning msg="cleaning up after shim disconnected" id=e5a599f2a2f0b1102ee7dc34aa6220406105bc681a632f23744afd892f75b06e namespace=k8s.io Feb 9 09:57:35.178040 env[1216]: time="2024-02-09T09:57:35.178037449Z" level=info msg="cleaning up dead 
shim" Feb 9 09:57:35.185038 env[1216]: time="2024-02-09T09:57:35.184978992Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:57:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2085 runtime=io.containerd.runc.v2\n" Feb 9 09:57:35.869387 kubelet[1518]: E0209 09:57:35.869339 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:36.061987 kubelet[1518]: E0209 09:57:36.061961 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:57:36.066169 env[1216]: time="2024-02-09T09:57:36.066125339Z" level=info msg="CreateContainer within sandbox \"5ef5751dc3d9416fde29d4c3535c2ef3dc6edd74215a5065dfba34969aa0d2c6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 09:57:36.080823 env[1216]: time="2024-02-09T09:57:36.080753186Z" level=info msg="CreateContainer within sandbox \"5ef5751dc3d9416fde29d4c3535c2ef3dc6edd74215a5065dfba34969aa0d2c6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ba2bb65f805b5eafd685fe588d8a633f902a275135927b7eadd6a29e212b3b8e\"" Feb 9 09:57:36.081241 env[1216]: time="2024-02-09T09:57:36.081208932Z" level=info msg="StartContainer for \"ba2bb65f805b5eafd685fe588d8a633f902a275135927b7eadd6a29e212b3b8e\"" Feb 9 09:57:36.135386 env[1216]: time="2024-02-09T09:57:36.135265451Z" level=info msg="StartContainer for \"ba2bb65f805b5eafd685fe588d8a633f902a275135927b7eadd6a29e212b3b8e\" returns successfully" Feb 9 09:57:36.210935 kubelet[1518]: I0209 09:57:36.210896 1518 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 09:57:36.382355 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Feb 9 09:57:36.625353 kernel: Initializing XFRM netlink socket Feb 9 09:57:36.627380 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 9 09:57:36.869536 kubelet[1518]: E0209 09:57:36.869475 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:37.066429 kubelet[1518]: E0209 09:57:37.066028 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:57:37.870227 kubelet[1518]: E0209 09:57:37.870173 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:38.067113 kubelet[1518]: E0209 09:57:38.067084 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:57:38.237694 systemd-networkd[1100]: cilium_host: Link UP Feb 9 09:57:38.237808 systemd-networkd[1100]: cilium_net: Link UP Feb 9 09:57:38.238481 systemd-networkd[1100]: cilium_net: Gained carrier Feb 9 09:57:38.239035 systemd-networkd[1100]: cilium_host: Gained carrier Feb 9 09:57:38.239378 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 09:57:38.239469 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 09:57:38.322133 systemd-networkd[1100]: cilium_vxlan: Link UP Feb 9 09:57:38.322140 systemd-networkd[1100]: cilium_vxlan: Gained carrier Feb 9 09:57:38.523478 systemd-networkd[1100]: cilium_host: Gained IPv6LL Feb 9 09:57:38.608360 kernel: NET: Registered PF_ALG protocol family Feb 9 09:57:38.871361 kubelet[1518]: E0209 09:57:38.871278 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:39.068787 kubelet[1518]: E0209 
09:57:39.068752 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:57:39.084534 systemd-networkd[1100]: cilium_net: Gained IPv6LL Feb 9 09:57:39.168640 systemd-networkd[1100]: lxc_health: Link UP Feb 9 09:57:39.180429 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 09:57:39.178067 systemd-networkd[1100]: lxc_health: Gained carrier Feb 9 09:57:39.191386 kubelet[1518]: I0209 09:57:39.190661 1518 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-d8f26" podStartSLOduration=-9.223372015664152e+09 pod.CreationTimestamp="2024-02-09 09:57:18 +0000 UTC" firstStartedPulling="2024-02-09 09:57:25.810183908 +0000 UTC m=+20.848386140" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:57:37.081866165 +0000 UTC m=+32.120068397" watchObservedRunningTime="2024-02-09 09:57:39.190622889 +0000 UTC m=+34.228825161" Feb 9 09:57:39.532426 systemd-networkd[1100]: cilium_vxlan: Gained IPv6LL Feb 9 09:57:39.871914 kubelet[1518]: E0209 09:57:39.871805 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:40.070545 kubelet[1518]: E0209 09:57:40.070509 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:57:40.720162 kubelet[1518]: I0209 09:57:40.720108 1518 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:57:40.830729 kubelet[1518]: I0209 09:57:40.830690 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66twq\" (UniqueName: \"kubernetes.io/projected/3df27830-c091-42b0-aa06-bb77ee6ea91d-kube-api-access-66twq\") pod \"nginx-deployment-8ffc5cf85-scjxg\" (UID: 
\"3df27830-c091-42b0-aa06-bb77ee6ea91d\") " pod="default/nginx-deployment-8ffc5cf85-scjxg" Feb 9 09:57:40.872884 kubelet[1518]: E0209 09:57:40.872858 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:41.023734 env[1216]: time="2024-02-09T09:57:41.023625226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-scjxg,Uid:3df27830-c091-42b0-aa06-bb77ee6ea91d,Namespace:default,Attempt:0,}" Feb 9 09:57:41.057819 systemd-networkd[1100]: lxc75f278de1750: Link UP Feb 9 09:57:41.069345 kernel: eth0: renamed from tmp0bfc9 Feb 9 09:57:41.073045 kubelet[1518]: E0209 09:57:41.073021 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:57:41.079532 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:57:41.079606 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc75f278de1750: link becomes ready Feb 9 09:57:41.079583 systemd-networkd[1100]: lxc75f278de1750: Gained carrier Feb 9 09:57:41.259443 systemd-networkd[1100]: lxc_health: Gained IPv6LL Feb 9 09:57:41.873432 kubelet[1518]: E0209 09:57:41.873377 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:42.073912 kubelet[1518]: E0209 09:57:42.073864 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:57:42.859447 systemd-networkd[1100]: lxc75f278de1750: Gained IPv6LL Feb 9 09:57:42.873693 kubelet[1518]: E0209 09:57:42.873648 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:43.679101 env[1216]: time="2024-02-09T09:57:43.679020829Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:57:43.679429 env[1216]: time="2024-02-09T09:57:43.679107627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:57:43.679429 env[1216]: time="2024-02-09T09:57:43.679144746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:57:43.679429 env[1216]: time="2024-02-09T09:57:43.679315783Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0bfc9e755ff7ea801bf491dcdd6b25223b275c2afdcff15e2533eecf338fcfdf pid=2623 runtime=io.containerd.runc.v2 Feb 9 09:57:43.696541 systemd[1]: run-containerd-runc-k8s.io-0bfc9e755ff7ea801bf491dcdd6b25223b275c2afdcff15e2533eecf338fcfdf-runc.bQvSZb.mount: Deactivated successfully. Feb 9 09:57:43.749388 systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 09:57:43.767740 env[1216]: time="2024-02-09T09:57:43.767689904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-scjxg,Uid:3df27830-c091-42b0-aa06-bb77ee6ea91d,Namespace:default,Attempt:0,} returns sandbox id \"0bfc9e755ff7ea801bf491dcdd6b25223b275c2afdcff15e2533eecf338fcfdf\"" Feb 9 09:57:43.769161 env[1216]: time="2024-02-09T09:57:43.769128115Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 09:57:43.874358 kubelet[1518]: E0209 09:57:43.874239 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:44.045836 update_engine[1203]: I0209 09:57:44.045794 1203 update_attempter.cc:509] Updating boot flags... 
Feb 9 09:57:44.875426 kubelet[1518]: E0209 09:57:44.875369 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:45.850857 kubelet[1518]: E0209 09:57:45.850812 1518 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:45.876204 kubelet[1518]: E0209 09:57:45.876158 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:46.042237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2207484890.mount: Deactivated successfully. Feb 9 09:57:46.746877 env[1216]: time="2024-02-09T09:57:46.746830369Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:46.748233 env[1216]: time="2024-02-09T09:57:46.748193745Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:46.749810 env[1216]: time="2024-02-09T09:57:46.749783557Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:46.751913 env[1216]: time="2024-02-09T09:57:46.751889360Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:46.752494 env[1216]: time="2024-02-09T09:57:46.752465630Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\"" Feb 9 09:57:46.754116 env[1216]: 
time="2024-02-09T09:57:46.754064282Z" level=info msg="CreateContainer within sandbox \"0bfc9e755ff7ea801bf491dcdd6b25223b275c2afdcff15e2533eecf338fcfdf\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 9 09:57:46.762616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1422701770.mount: Deactivated successfully. Feb 9 09:57:46.766408 env[1216]: time="2024-02-09T09:57:46.766364907Z" level=info msg="CreateContainer within sandbox \"0bfc9e755ff7ea801bf491dcdd6b25223b275c2afdcff15e2533eecf338fcfdf\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"727c61bd6c53e1cb2e3b1b881d49e56e1ad1c162a93c87f7adb71911409990d1\"" Feb 9 09:57:46.767020 env[1216]: time="2024-02-09T09:57:46.766992456Z" level=info msg="StartContainer for \"727c61bd6c53e1cb2e3b1b881d49e56e1ad1c162a93c87f7adb71911409990d1\"" Feb 9 09:57:46.824668 env[1216]: time="2024-02-09T09:57:46.824606168Z" level=info msg="StartContainer for \"727c61bd6c53e1cb2e3b1b881d49e56e1ad1c162a93c87f7adb71911409990d1\" returns successfully" Feb 9 09:57:46.877171 kubelet[1518]: E0209 09:57:46.877114 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:47.090376 kubelet[1518]: I0209 09:57:47.090334 1518 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-scjxg" podStartSLOduration=-9.223372029764492e+09 pod.CreationTimestamp="2024-02-09 09:57:40 +0000 UTC" firstStartedPulling="2024-02-09 09:57:43.7688786 +0000 UTC m=+38.807080832" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:57:47.090105955 +0000 UTC m=+42.128308147" watchObservedRunningTime="2024-02-09 09:57:47.090284352 +0000 UTC m=+42.128486584" Feb 9 09:57:47.878201 kubelet[1518]: E0209 09:57:47.878156 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:48.878783 kubelet[1518]: 
E0209 09:57:48.878741 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:49.879812 kubelet[1518]: E0209 09:57:49.879779 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:50.880732 kubelet[1518]: E0209 09:57:50.880675 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:51.881283 kubelet[1518]: E0209 09:57:51.881240 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:52.304472 kubelet[1518]: I0209 09:57:52.304421 1518 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:57:52.390947 kubelet[1518]: I0209 09:57:52.390898 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wkhs\" (UniqueName: \"kubernetes.io/projected/c36a3371-229c-4b17-b04c-a6b85932fc65-kube-api-access-9wkhs\") pod \"nfs-server-provisioner-0\" (UID: \"c36a3371-229c-4b17-b04c-a6b85932fc65\") " pod="default/nfs-server-provisioner-0" Feb 9 09:57:52.390947 kubelet[1518]: I0209 09:57:52.390956 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/c36a3371-229c-4b17-b04c-a6b85932fc65-data\") pod \"nfs-server-provisioner-0\" (UID: \"c36a3371-229c-4b17-b04c-a6b85932fc65\") " pod="default/nfs-server-provisioner-0" Feb 9 09:57:52.608075 env[1216]: time="2024-02-09T09:57:52.607935824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c36a3371-229c-4b17-b04c-a6b85932fc65,Namespace:default,Attempt:0,}" Feb 9 09:57:52.632524 systemd-networkd[1100]: lxcbfa4167ff378: Link UP Feb 9 09:57:52.642364 kernel: eth0: renamed from tmpd76fb Feb 9 09:57:52.649896 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 
eth0: link becomes ready Feb 9 09:57:52.649989 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcbfa4167ff378: link becomes ready Feb 9 09:57:52.650165 systemd-networkd[1100]: lxcbfa4167ff378: Gained carrier Feb 9 09:57:52.873623 env[1216]: time="2024-02-09T09:57:52.873485918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:57:52.873759 env[1216]: time="2024-02-09T09:57:52.873530597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:57:52.873759 env[1216]: time="2024-02-09T09:57:52.873561557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:57:52.873822 env[1216]: time="2024-02-09T09:57:52.873782394Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d76fbbeb92e531426cc5de6c61b35baff8ac5ca4b7e2ca51e11b797df596a052 pid=2808 runtime=io.containerd.runc.v2 Feb 9 09:57:52.882057 kubelet[1518]: E0209 09:57:52.882003 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:52.908870 systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 09:57:52.925958 env[1216]: time="2024-02-09T09:57:52.925897506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c36a3371-229c-4b17-b04c-a6b85932fc65,Namespace:default,Attempt:0,} returns sandbox id \"d76fbbeb92e531426cc5de6c61b35baff8ac5ca4b7e2ca51e11b797df596a052\"" Feb 9 09:57:52.927185 env[1216]: time="2024-02-09T09:57:52.927153329Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 9 09:57:53.677511 systemd-networkd[1100]: lxcbfa4167ff378: Gained IPv6LL Feb 9 09:57:53.882842 kubelet[1518]: 
E0209 09:57:53.882781 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:54.883712 kubelet[1518]: E0209 09:57:54.883671 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:54.983420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2782990968.mount: Deactivated successfully. Feb 9 09:57:55.884276 kubelet[1518]: E0209 09:57:55.884236 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:56.750441 env[1216]: time="2024-02-09T09:57:56.750385858Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:56.751687 env[1216]: time="2024-02-09T09:57:56.751657284Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:56.753128 env[1216]: time="2024-02-09T09:57:56.753099587Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:56.754846 env[1216]: time="2024-02-09T09:57:56.754823368Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:56.756364 env[1216]: time="2024-02-09T09:57:56.756314632Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Feb 
9 09:57:56.758175 env[1216]: time="2024-02-09T09:57:56.758141931Z" level=info msg="CreateContainer within sandbox \"d76fbbeb92e531426cc5de6c61b35baff8ac5ca4b7e2ca51e11b797df596a052\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 9 09:57:56.767153 env[1216]: time="2024-02-09T09:57:56.767116751Z" level=info msg="CreateContainer within sandbox \"d76fbbeb92e531426cc5de6c61b35baff8ac5ca4b7e2ca51e11b797df596a052\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"402b99cb06d3c8e1ec83b48e0e1ef54f36449c684bb3f19a840483190fee64c3\"" Feb 9 09:57:56.767890 env[1216]: time="2024-02-09T09:57:56.767849463Z" level=info msg="StartContainer for \"402b99cb06d3c8e1ec83b48e0e1ef54f36449c684bb3f19a840483190fee64c3\"" Feb 9 09:57:56.823746 env[1216]: time="2024-02-09T09:57:56.823705961Z" level=info msg="StartContainer for \"402b99cb06d3c8e1ec83b48e0e1ef54f36449c684bb3f19a840483190fee64c3\" returns successfully" Feb 9 09:57:56.884849 kubelet[1518]: E0209 09:57:56.884780 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:57.108047 kubelet[1518]: I0209 09:57:57.108017 1518 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372031746801e+09 pod.CreationTimestamp="2024-02-09 09:57:52 +0000 UTC" firstStartedPulling="2024-02-09 09:57:52.926937692 +0000 UTC m=+47.965139924" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:57:57.107842763 +0000 UTC m=+52.146044955" watchObservedRunningTime="2024-02-09 09:57:57.107974841 +0000 UTC m=+52.146177073" Feb 9 09:57:57.885789 kubelet[1518]: E0209 09:57:57.885711 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:58.886607 kubelet[1518]: E0209 09:57:58.886545 1518 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:57:59.887314 kubelet[1518]: E0209 09:57:59.887271 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:00.888015 kubelet[1518]: E0209 09:58:00.887953 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:01.888654 kubelet[1518]: E0209 09:58:01.888617 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:02.890061 kubelet[1518]: E0209 09:58:02.890018 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:03.890480 kubelet[1518]: E0209 09:58:03.890446 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:04.891511 kubelet[1518]: E0209 09:58:04.891475 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:05.851005 kubelet[1518]: E0209 09:58:05.850972 1518 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:05.892163 kubelet[1518]: E0209 09:58:05.892122 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:06.164253 kubelet[1518]: I0209 09:58:06.163964 1518 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:58:06.262974 kubelet[1518]: I0209 09:58:06.262926 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msmc4\" (UniqueName: \"kubernetes.io/projected/5d754533-6764-499a-ab7d-5626aed64356-kube-api-access-msmc4\") pod \"test-pod-1\" (UID: \"5d754533-6764-499a-ab7d-5626aed64356\") " 
pod="default/test-pod-1" Feb 9 09:58:06.262974 kubelet[1518]: I0209 09:58:06.262980 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-773679e6-0f13-4d71-8aa5-9a773e026a5f\" (UniqueName: \"kubernetes.io/nfs/5d754533-6764-499a-ab7d-5626aed64356-pvc-773679e6-0f13-4d71-8aa5-9a773e026a5f\") pod \"test-pod-1\" (UID: \"5d754533-6764-499a-ab7d-5626aed64356\") " pod="default/test-pod-1" Feb 9 09:58:06.385348 kernel: FS-Cache: Loaded Feb 9 09:58:06.408717 kernel: RPC: Registered named UNIX socket transport module. Feb 9 09:58:06.408823 kernel: RPC: Registered udp transport module. Feb 9 09:58:06.408842 kernel: RPC: Registered tcp transport module. Feb 9 09:58:06.409568 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 9 09:58:06.437420 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 9 09:58:06.570357 kernel: NFS: Registering the id_resolver key type Feb 9 09:58:06.570490 kernel: Key type id_resolver registered Feb 9 09:58:06.570512 kernel: Key type id_legacy registered Feb 9 09:58:06.603508 nfsidmap[2954]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 9 09:58:06.608507 nfsidmap[2957]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 9 09:58:06.767032 env[1216]: time="2024-02-09T09:58:06.766969878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:5d754533-6764-499a-ab7d-5626aed64356,Namespace:default,Attempt:0,}" Feb 9 09:58:06.789525 systemd-networkd[1100]: lxc6da4daefddb8: Link UP Feb 9 09:58:06.797787 kernel: eth0: renamed from tmpfcca4 Feb 9 09:58:06.808918 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:58:06.809002 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6da4daefddb8: link becomes ready Feb 9 09:58:06.809036 systemd-networkd[1100]: lxc6da4daefddb8: Gained carrier Feb 9 09:58:06.892835 
kubelet[1518]: E0209 09:58:06.892713 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:06.992848 env[1216]: time="2024-02-09T09:58:06.992776356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:58:06.993022 env[1216]: time="2024-02-09T09:58:06.992827115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:58:06.993022 env[1216]: time="2024-02-09T09:58:06.992840155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:58:06.993174 env[1216]: time="2024-02-09T09:58:06.993077713Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fcca4d87d3124e3684a299758147023df09a264c0110c2b01366a20c58e12fd3 pid=2992 runtime=io.containerd.runc.v2 Feb 9 09:58:07.034719 systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 09:58:07.053269 env[1216]: time="2024-02-09T09:58:07.053216736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:5d754533-6764-499a-ab7d-5626aed64356,Namespace:default,Attempt:0,} returns sandbox id \"fcca4d87d3124e3684a299758147023df09a264c0110c2b01366a20c58e12fd3\"" Feb 9 09:58:07.054722 env[1216]: time="2024-02-09T09:58:07.054685445Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 09:58:07.389023 env[1216]: time="2024-02-09T09:58:07.388965913Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:07.390748 env[1216]: time="2024-02-09T09:58:07.390706500Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:07.392144 env[1216]: time="2024-02-09T09:58:07.392097289Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:07.394704 env[1216]: time="2024-02-09T09:58:07.394661670Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:07.395025 env[1216]: time="2024-02-09T09:58:07.394982667Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\"" Feb 9 09:58:07.397011 env[1216]: time="2024-02-09T09:58:07.396976252Z" level=info msg="CreateContainer within sandbox \"fcca4d87d3124e3684a299758147023df09a264c0110c2b01366a20c58e12fd3\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 9 09:58:07.408055 env[1216]: time="2024-02-09T09:58:07.408004169Z" level=info msg="CreateContainer within sandbox \"fcca4d87d3124e3684a299758147023df09a264c0110c2b01366a20c58e12fd3\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"319235933e11f864533bd6623742b007b60c437776194f35eed8db68b1223d38\"" Feb 9 09:58:07.408628 env[1216]: time="2024-02-09T09:58:07.408581124Z" level=info msg="StartContainer for \"319235933e11f864533bd6623742b007b60c437776194f35eed8db68b1223d38\"" Feb 9 09:58:07.546319 env[1216]: time="2024-02-09T09:58:07.546254322Z" level=info msg="StartContainer for \"319235933e11f864533bd6623742b007b60c437776194f35eed8db68b1223d38\" returns successfully" Feb 9 09:58:07.893506 kubelet[1518]: E0209 09:58:07.893460 1518 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:08.135311 kubelet[1518]: I0209 09:58:08.135212 1518 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.223372020719599e+09 pod.CreationTimestamp="2024-02-09 09:57:52 +0000 UTC" firstStartedPulling="2024-02-09 09:58:07.054376407 +0000 UTC m=+62.092578639" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:58:08.134751532 +0000 UTC m=+63.172953724" watchObservedRunningTime="2024-02-09 09:58:08.135177009 +0000 UTC m=+63.173379241" Feb 9 09:58:08.139473 systemd-networkd[1100]: lxc6da4daefddb8: Gained IPv6LL Feb 9 09:58:08.894512 kubelet[1518]: E0209 09:58:08.894467 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:09.894959 kubelet[1518]: E0209 09:58:09.894918 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:10.895445 kubelet[1518]: E0209 09:58:10.895411 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:11.896616 kubelet[1518]: E0209 09:58:11.896583 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:12.896907 kubelet[1518]: E0209 09:58:12.896867 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:13.897766 kubelet[1518]: E0209 09:58:13.897731 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:14.898646 kubelet[1518]: E0209 09:58:14.898611 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:15.008548 env[1216]: 
time="2024-02-09T09:58:15.008270909Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 09:58:15.014356 env[1216]: time="2024-02-09T09:58:15.014300432Z" level=info msg="StopContainer for \"ba2bb65f805b5eafd685fe588d8a633f902a275135927b7eadd6a29e212b3b8e\" with timeout 1 (s)" Feb 9 09:58:15.014586 env[1216]: time="2024-02-09T09:58:15.014548070Z" level=info msg="Stop container \"ba2bb65f805b5eafd685fe588d8a633f902a275135927b7eadd6a29e212b3b8e\" with signal terminated" Feb 9 09:58:15.019883 systemd-networkd[1100]: lxc_health: Link DOWN Feb 9 09:58:15.019889 systemd-networkd[1100]: lxc_health: Lost carrier Feb 9 09:58:15.069165 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba2bb65f805b5eafd685fe588d8a633f902a275135927b7eadd6a29e212b3b8e-rootfs.mount: Deactivated successfully. 
Feb 9 09:58:15.077955 env[1216]: time="2024-02-09T09:58:15.077903278Z" level=info msg="shim disconnected" id=ba2bb65f805b5eafd685fe588d8a633f902a275135927b7eadd6a29e212b3b8e Feb 9 09:58:15.077955 env[1216]: time="2024-02-09T09:58:15.077953838Z" level=warning msg="cleaning up after shim disconnected" id=ba2bb65f805b5eafd685fe588d8a633f902a275135927b7eadd6a29e212b3b8e namespace=k8s.io Feb 9 09:58:15.078129 env[1216]: time="2024-02-09T09:58:15.077963958Z" level=info msg="cleaning up dead shim" Feb 9 09:58:15.085003 env[1216]: time="2024-02-09T09:58:15.084957035Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:58:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3126 runtime=io.containerd.runc.v2\n" Feb 9 09:58:15.086983 env[1216]: time="2024-02-09T09:58:15.086948662Z" level=info msg="StopContainer for \"ba2bb65f805b5eafd685fe588d8a633f902a275135927b7eadd6a29e212b3b8e\" returns successfully" Feb 9 09:58:15.087818 env[1216]: time="2024-02-09T09:58:15.087779817Z" level=info msg="StopPodSandbox for \"5ef5751dc3d9416fde29d4c3535c2ef3dc6edd74215a5065dfba34969aa0d2c6\"" Feb 9 09:58:15.088263 env[1216]: time="2024-02-09T09:58:15.088226815Z" level=info msg="Container to stop \"0e4b6c07018b1c70ba9c6918af5a94f3c2dad3b54c770d245578973dc207c74a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:58:15.088295 env[1216]: time="2024-02-09T09:58:15.088261694Z" level=info msg="Container to stop \"0f346bd014460bb96fdcdf20925fb6b87983f7200920789e081e18a139a3b3c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:58:15.088295 env[1216]: time="2024-02-09T09:58:15.088275934Z" level=info msg="Container to stop \"56f3ffcea3f49f047ebf745df0a5052a7b232d10f185e2516c7229b2c153c354\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:58:15.088428 env[1216]: time="2024-02-09T09:58:15.088402653Z" level=info msg="Container to stop 
\"e5a599f2a2f0b1102ee7dc34aa6220406105bc681a632f23744afd892f75b06e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:58:15.088467 env[1216]: time="2024-02-09T09:58:15.088422573Z" level=info msg="Container to stop \"ba2bb65f805b5eafd685fe588d8a633f902a275135927b7eadd6a29e212b3b8e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:58:15.090036 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5ef5751dc3d9416fde29d4c3535c2ef3dc6edd74215a5065dfba34969aa0d2c6-shm.mount: Deactivated successfully. Feb 9 09:58:15.112418 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ef5751dc3d9416fde29d4c3535c2ef3dc6edd74215a5065dfba34969aa0d2c6-rootfs.mount: Deactivated successfully. Feb 9 09:58:15.117699 env[1216]: time="2024-02-09T09:58:15.117643993Z" level=info msg="shim disconnected" id=5ef5751dc3d9416fde29d4c3535c2ef3dc6edd74215a5065dfba34969aa0d2c6 Feb 9 09:58:15.117699 env[1216]: time="2024-02-09T09:58:15.117699272Z" level=warning msg="cleaning up after shim disconnected" id=5ef5751dc3d9416fde29d4c3535c2ef3dc6edd74215a5065dfba34969aa0d2c6 namespace=k8s.io Feb 9 09:58:15.117874 env[1216]: time="2024-02-09T09:58:15.117708672Z" level=info msg="cleaning up dead shim" Feb 9 09:58:15.125600 env[1216]: time="2024-02-09T09:58:15.125551384Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:58:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3159 runtime=io.containerd.runc.v2\n" Feb 9 09:58:15.125886 env[1216]: time="2024-02-09T09:58:15.125858382Z" level=info msg="TearDown network for sandbox \"5ef5751dc3d9416fde29d4c3535c2ef3dc6edd74215a5065dfba34969aa0d2c6\" successfully" Feb 9 09:58:15.125927 env[1216]: time="2024-02-09T09:58:15.125885262Z" level=info msg="StopPodSandbox for \"5ef5751dc3d9416fde29d4c3535c2ef3dc6edd74215a5065dfba34969aa0d2c6\" returns successfully" Feb 9 09:58:15.134607 kubelet[1518]: I0209 09:58:15.134584 1518 scope.go:115] "RemoveContainer" 
containerID="ba2bb65f805b5eafd685fe588d8a633f902a275135927b7eadd6a29e212b3b8e" Feb 9 09:58:15.135836 env[1216]: time="2024-02-09T09:58:15.135793280Z" level=info msg="RemoveContainer for \"ba2bb65f805b5eafd685fe588d8a633f902a275135927b7eadd6a29e212b3b8e\"" Feb 9 09:58:15.139986 env[1216]: time="2024-02-09T09:58:15.139947055Z" level=info msg="RemoveContainer for \"ba2bb65f805b5eafd685fe588d8a633f902a275135927b7eadd6a29e212b3b8e\" returns successfully" Feb 9 09:58:15.140198 kubelet[1518]: I0209 09:58:15.140167 1518 scope.go:115] "RemoveContainer" containerID="e5a599f2a2f0b1102ee7dc34aa6220406105bc681a632f23744afd892f75b06e" Feb 9 09:58:15.141290 env[1216]: time="2024-02-09T09:58:15.141257527Z" level=info msg="RemoveContainer for \"e5a599f2a2f0b1102ee7dc34aa6220406105bc681a632f23744afd892f75b06e\"" Feb 9 09:58:15.143633 env[1216]: time="2024-02-09T09:58:15.143597552Z" level=info msg="RemoveContainer for \"e5a599f2a2f0b1102ee7dc34aa6220406105bc681a632f23744afd892f75b06e\" returns successfully" Feb 9 09:58:15.143832 kubelet[1518]: I0209 09:58:15.143807 1518 scope.go:115] "RemoveContainer" containerID="56f3ffcea3f49f047ebf745df0a5052a7b232d10f185e2516c7229b2c153c354" Feb 9 09:58:15.144788 env[1216]: time="2024-02-09T09:58:15.144762585Z" level=info msg="RemoveContainer for \"56f3ffcea3f49f047ebf745df0a5052a7b232d10f185e2516c7229b2c153c354\"" Feb 9 09:58:15.147005 env[1216]: time="2024-02-09T09:58:15.146972851Z" level=info msg="RemoveContainer for \"56f3ffcea3f49f047ebf745df0a5052a7b232d10f185e2516c7229b2c153c354\" returns successfully" Feb 9 09:58:15.147154 kubelet[1518]: I0209 09:58:15.147125 1518 scope.go:115] "RemoveContainer" containerID="0f346bd014460bb96fdcdf20925fb6b87983f7200920789e081e18a139a3b3c3" Feb 9 09:58:15.150393 env[1216]: time="2024-02-09T09:58:15.149217957Z" level=info msg="RemoveContainer for \"0f346bd014460bb96fdcdf20925fb6b87983f7200920789e081e18a139a3b3c3\"" Feb 9 09:58:15.151704 env[1216]: time="2024-02-09T09:58:15.151656622Z" level=info 
msg="RemoveContainer for \"0f346bd014460bb96fdcdf20925fb6b87983f7200920789e081e18a139a3b3c3\" returns successfully" Feb 9 09:58:15.151829 kubelet[1518]: I0209 09:58:15.151810 1518 scope.go:115] "RemoveContainer" containerID="0e4b6c07018b1c70ba9c6918af5a94f3c2dad3b54c770d245578973dc207c74a" Feb 9 09:58:15.152808 env[1216]: time="2024-02-09T09:58:15.152780095Z" level=info msg="RemoveContainer for \"0e4b6c07018b1c70ba9c6918af5a94f3c2dad3b54c770d245578973dc207c74a\"" Feb 9 09:58:15.155016 env[1216]: time="2024-02-09T09:58:15.154982042Z" level=info msg="RemoveContainer for \"0e4b6c07018b1c70ba9c6918af5a94f3c2dad3b54c770d245578973dc207c74a\" returns successfully" Feb 9 09:58:15.155143 kubelet[1518]: I0209 09:58:15.155123 1518 scope.go:115] "RemoveContainer" containerID="ba2bb65f805b5eafd685fe588d8a633f902a275135927b7eadd6a29e212b3b8e" Feb 9 09:58:15.155406 env[1216]: time="2024-02-09T09:58:15.155309440Z" level=error msg="ContainerStatus for \"ba2bb65f805b5eafd685fe588d8a633f902a275135927b7eadd6a29e212b3b8e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ba2bb65f805b5eafd685fe588d8a633f902a275135927b7eadd6a29e212b3b8e\": not found" Feb 9 09:58:15.155516 kubelet[1518]: E0209 09:58:15.155501 1518 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ba2bb65f805b5eafd685fe588d8a633f902a275135927b7eadd6a29e212b3b8e\": not found" containerID="ba2bb65f805b5eafd685fe588d8a633f902a275135927b7eadd6a29e212b3b8e" Feb 9 09:58:15.155555 kubelet[1518]: I0209 09:58:15.155535 1518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:ba2bb65f805b5eafd685fe588d8a633f902a275135927b7eadd6a29e212b3b8e} err="failed to get container status \"ba2bb65f805b5eafd685fe588d8a633f902a275135927b7eadd6a29e212b3b8e\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"ba2bb65f805b5eafd685fe588d8a633f902a275135927b7eadd6a29e212b3b8e\": not found" Feb 9 09:58:15.155555 kubelet[1518]: I0209 09:58:15.155548 1518 scope.go:115] "RemoveContainer" containerID="e5a599f2a2f0b1102ee7dc34aa6220406105bc681a632f23744afd892f75b06e" Feb 9 09:58:15.155781 env[1216]: time="2024-02-09T09:58:15.155728797Z" level=error msg="ContainerStatus for \"e5a599f2a2f0b1102ee7dc34aa6220406105bc681a632f23744afd892f75b06e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e5a599f2a2f0b1102ee7dc34aa6220406105bc681a632f23744afd892f75b06e\": not found" Feb 9 09:58:15.155984 kubelet[1518]: E0209 09:58:15.155963 1518 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e5a599f2a2f0b1102ee7dc34aa6220406105bc681a632f23744afd892f75b06e\": not found" containerID="e5a599f2a2f0b1102ee7dc34aa6220406105bc681a632f23744afd892f75b06e" Feb 9 09:58:15.156033 kubelet[1518]: I0209 09:58:15.155997 1518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:e5a599f2a2f0b1102ee7dc34aa6220406105bc681a632f23744afd892f75b06e} err="failed to get container status \"e5a599f2a2f0b1102ee7dc34aa6220406105bc681a632f23744afd892f75b06e\": rpc error: code = NotFound desc = an error occurred when try to find container \"e5a599f2a2f0b1102ee7dc34aa6220406105bc681a632f23744afd892f75b06e\": not found" Feb 9 09:58:15.156033 kubelet[1518]: I0209 09:58:15.156010 1518 scope.go:115] "RemoveContainer" containerID="56f3ffcea3f49f047ebf745df0a5052a7b232d10f185e2516c7229b2c153c354" Feb 9 09:58:15.156206 env[1216]: time="2024-02-09T09:58:15.156161954Z" level=error msg="ContainerStatus for \"56f3ffcea3f49f047ebf745df0a5052a7b232d10f185e2516c7229b2c153c354\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"56f3ffcea3f49f047ebf745df0a5052a7b232d10f185e2516c7229b2c153c354\": not 
found" Feb 9 09:58:15.156305 kubelet[1518]: E0209 09:58:15.156291 1518 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"56f3ffcea3f49f047ebf745df0a5052a7b232d10f185e2516c7229b2c153c354\": not found" containerID="56f3ffcea3f49f047ebf745df0a5052a7b232d10f185e2516c7229b2c153c354" Feb 9 09:58:15.156354 kubelet[1518]: I0209 09:58:15.156330 1518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:56f3ffcea3f49f047ebf745df0a5052a7b232d10f185e2516c7229b2c153c354} err="failed to get container status \"56f3ffcea3f49f047ebf745df0a5052a7b232d10f185e2516c7229b2c153c354\": rpc error: code = NotFound desc = an error occurred when try to find container \"56f3ffcea3f49f047ebf745df0a5052a7b232d10f185e2516c7229b2c153c354\": not found" Feb 9 09:58:15.156419 kubelet[1518]: I0209 09:58:15.156340 1518 scope.go:115] "RemoveContainer" containerID="0f346bd014460bb96fdcdf20925fb6b87983f7200920789e081e18a139a3b3c3" Feb 9 09:58:15.156671 env[1216]: time="2024-02-09T09:58:15.156610472Z" level=error msg="ContainerStatus for \"0f346bd014460bb96fdcdf20925fb6b87983f7200920789e081e18a139a3b3c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0f346bd014460bb96fdcdf20925fb6b87983f7200920789e081e18a139a3b3c3\": not found" Feb 9 09:58:15.156786 kubelet[1518]: E0209 09:58:15.156773 1518 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0f346bd014460bb96fdcdf20925fb6b87983f7200920789e081e18a139a3b3c3\": not found" containerID="0f346bd014460bb96fdcdf20925fb6b87983f7200920789e081e18a139a3b3c3" Feb 9 09:58:15.156823 kubelet[1518]: I0209 09:58:15.156795 1518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd 
ID:0f346bd014460bb96fdcdf20925fb6b87983f7200920789e081e18a139a3b3c3} err="failed to get container status \"0f346bd014460bb96fdcdf20925fb6b87983f7200920789e081e18a139a3b3c3\": rpc error: code = NotFound desc = an error occurred when try to find container \"0f346bd014460bb96fdcdf20925fb6b87983f7200920789e081e18a139a3b3c3\": not found" Feb 9 09:58:15.156823 kubelet[1518]: I0209 09:58:15.156804 1518 scope.go:115] "RemoveContainer" containerID="0e4b6c07018b1c70ba9c6918af5a94f3c2dad3b54c770d245578973dc207c74a" Feb 9 09:58:15.157037 env[1216]: time="2024-02-09T09:58:15.156977709Z" level=error msg="ContainerStatus for \"0e4b6c07018b1c70ba9c6918af5a94f3c2dad3b54c770d245578973dc207c74a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0e4b6c07018b1c70ba9c6918af5a94f3c2dad3b54c770d245578973dc207c74a\": not found" Feb 9 09:58:15.157140 kubelet[1518]: E0209 09:58:15.157122 1518 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0e4b6c07018b1c70ba9c6918af5a94f3c2dad3b54c770d245578973dc207c74a\": not found" containerID="0e4b6c07018b1c70ba9c6918af5a94f3c2dad3b54c770d245578973dc207c74a" Feb 9 09:58:15.157175 kubelet[1518]: I0209 09:58:15.157154 1518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:0e4b6c07018b1c70ba9c6918af5a94f3c2dad3b54c770d245578973dc207c74a} err="failed to get container status \"0e4b6c07018b1c70ba9c6918af5a94f3c2dad3b54c770d245578973dc207c74a\": rpc error: code = NotFound desc = an error occurred when try to find container \"0e4b6c07018b1c70ba9c6918af5a94f3c2dad3b54c770d245578973dc207c74a\": not found" Feb 9 09:58:15.206378 kubelet[1518]: I0209 09:58:15.206341 1518 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-hostproc\") pod 
\"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\" (UID: \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\") " Feb 9 09:58:15.206378 kubelet[1518]: I0209 09:58:15.206383 1518 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-cni-path\") pod \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\" (UID: \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\") " Feb 9 09:58:15.206483 kubelet[1518]: I0209 09:58:15.206410 1518 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thqcx\" (UniqueName: \"kubernetes.io/projected/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-kube-api-access-thqcx\") pod \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\" (UID: \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\") " Feb 9 09:58:15.206483 kubelet[1518]: I0209 09:58:15.206436 1518 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-bpf-maps\") pod \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\" (UID: \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\") " Feb 9 09:58:15.206483 kubelet[1518]: I0209 09:58:15.206435 1518 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-hostproc" (OuterVolumeSpecName: "hostproc") pod "e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033" (UID: "e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:58:15.206483 kubelet[1518]: I0209 09:58:15.206458 1518 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-host-proc-sys-net\") pod \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\" (UID: \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\") "
Feb 9 09:58:15.206483 kubelet[1518]: I0209 09:58:15.206480 1518 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-cilium-config-path\") pod \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\" (UID: \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\") "
Feb 9 09:58:15.206605 kubelet[1518]: I0209 09:58:15.206465 1518 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-cni-path" (OuterVolumeSpecName: "cni-path") pod "e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033" (UID: "e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:58:15.206605 kubelet[1518]: I0209 09:58:15.206498 1518 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-cilium-run\") pod \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\" (UID: \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\") "
Feb 9 09:58:15.206605 kubelet[1518]: I0209 09:58:15.206533 1518 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033" (UID: "e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:58:15.206605 kubelet[1518]: I0209 09:58:15.206553 1518 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-xtables-lock\") pod \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\" (UID: \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\") "
Feb 9 09:58:15.206605 kubelet[1518]: I0209 09:58:15.206581 1518 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-clustermesh-secrets\") pod \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\" (UID: \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\") "
Feb 9 09:58:15.206605 kubelet[1518]: I0209 09:58:15.206604 1518 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-hubble-tls\") pod \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\" (UID: \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\") "
Feb 9 09:58:15.206753 kubelet[1518]: I0209 09:58:15.206623 1518 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-cilium-cgroup\") pod \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\" (UID: \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\") "
Feb 9 09:58:15.206753 kubelet[1518]: I0209 09:58:15.206641 1518 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-etc-cni-netd\") pod \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\" (UID: \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\") "
Feb 9 09:58:15.206753 kubelet[1518]: I0209 09:58:15.206676 1518 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-lib-modules\") pod \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\" (UID: \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\") "
Feb 9 09:58:15.206753 kubelet[1518]: I0209 09:58:15.206696 1518 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-host-proc-sys-kernel\") pod \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\" (UID: \"e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033\") "
Feb 9 09:58:15.206753 kubelet[1518]: W0209 09:58:15.206706 1518 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 9 09:58:15.206753 kubelet[1518]: I0209 09:58:15.206729 1518 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-hostproc\") on node \"10.0.0.81\" DevicePath \"\""
Feb 9 09:58:15.206753 kubelet[1518]: I0209 09:58:15.206740 1518 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-cni-path\") on node \"10.0.0.81\" DevicePath \"\""
Feb 9 09:58:15.206904 kubelet[1518]: I0209 09:58:15.206750 1518 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-bpf-maps\") on node \"10.0.0.81\" DevicePath \"\""
Feb 9 09:58:15.206904 kubelet[1518]: I0209 09:58:15.206771 1518 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033" (UID: "e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:58:15.206904 kubelet[1518]: I0209 09:58:15.206521 1518 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033" (UID: "e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:58:15.206904 kubelet[1518]: I0209 09:58:15.206793 1518 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033" (UID: "e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:58:15.206904 kubelet[1518]: I0209 09:58:15.206553 1518 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033" (UID: "e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:58:15.207225 kubelet[1518]: I0209 09:58:15.207143 1518 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033" (UID: "e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:58:15.207225 kubelet[1518]: I0209 09:58:15.207190 1518 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033" (UID: "e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:58:15.207225 kubelet[1518]: I0209 09:58:15.207208 1518 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033" (UID: "e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:58:15.208810 kubelet[1518]: I0209 09:58:15.208514 1518 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033" (UID: "e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 09:58:15.209210 kubelet[1518]: I0209 09:58:15.209163 1518 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-kube-api-access-thqcx" (OuterVolumeSpecName: "kube-api-access-thqcx") pod "e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033" (UID: "e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033"). InnerVolumeSpecName "kube-api-access-thqcx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 09:58:15.210227 kubelet[1518]: I0209 09:58:15.210201 1518 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033" (UID: "e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 09:58:15.210253 systemd[1]: var-lib-kubelet-pods-e99fe3b6\x2d1bf7\x2d4229\x2db2d3\x2d1efb4d0b7033-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dthqcx.mount: Deactivated successfully.
Feb 9 09:58:15.210492 kubelet[1518]: I0209 09:58:15.210445 1518 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033" (UID: "e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 09:58:15.307783 kubelet[1518]: I0209 09:58:15.307743 1518 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-hubble-tls\") on node \"10.0.0.81\" DevicePath \"\""
Feb 9 09:58:15.307783 kubelet[1518]: I0209 09:58:15.307775 1518 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-cilium-run\") on node \"10.0.0.81\" DevicePath \"\""
Feb 9 09:58:15.307783 kubelet[1518]: I0209 09:58:15.307787 1518 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-xtables-lock\") on node \"10.0.0.81\" DevicePath \"\""
Feb 9 09:58:15.307783 kubelet[1518]: I0209 09:58:15.307797 1518 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-clustermesh-secrets\") on node \"10.0.0.81\" DevicePath \"\""
Feb 9 09:58:15.308000 kubelet[1518]: I0209 09:58:15.307808 1518 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-host-proc-sys-kernel\") on node \"10.0.0.81\" DevicePath \"\""
Feb 9 09:58:15.308000 kubelet[1518]: I0209 09:58:15.307819 1518 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-cilium-cgroup\") on node \"10.0.0.81\" DevicePath \"\""
Feb 9 09:58:15.308000 kubelet[1518]: I0209 09:58:15.307828 1518 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-etc-cni-netd\") on node \"10.0.0.81\" DevicePath \"\""
Feb 9 09:58:15.308000 kubelet[1518]: I0209 09:58:15.307836 1518 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-lib-modules\") on node \"10.0.0.81\" DevicePath \"\""
Feb 9 09:58:15.308000 kubelet[1518]: I0209 09:58:15.307847 1518 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-thqcx\" (UniqueName: \"kubernetes.io/projected/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-kube-api-access-thqcx\") on node \"10.0.0.81\" DevicePath \"\""
Feb 9 09:58:15.308000 kubelet[1518]: I0209 09:58:15.307856 1518 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-host-proc-sys-net\") on node \"10.0.0.81\" DevicePath \"\""
Feb 9 09:58:15.308000 kubelet[1518]: I0209 09:58:15.307864 1518 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033-cilium-config-path\") on node \"10.0.0.81\" DevicePath \"\""
Feb 9 09:58:15.899339 kubelet[1518]: E0209 09:58:15.899288 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:58:15.916257 kubelet[1518]: E0209 09:58:15.916234 1518 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 09:58:15.982051 systemd[1]: var-lib-kubelet-pods-e99fe3b6\x2d1bf7\x2d4229\x2db2d3\x2d1efb4d0b7033-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 9 09:58:15.982212 systemd[1]: var-lib-kubelet-pods-e99fe3b6\x2d1bf7\x2d4229\x2db2d3\x2d1efb4d0b7033-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 9 09:58:16.012500 kubelet[1518]: I0209 09:58:16.012462 1518 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033 path="/var/lib/kubelet/pods/e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033/volumes"
Feb 9 09:58:16.899756 kubelet[1518]: E0209 09:58:16.899716 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:58:17.719269 kubelet[1518]: I0209 09:58:17.719220 1518 topology_manager.go:210] "Topology Admit Handler"
Feb 9 09:58:17.719269 kubelet[1518]: E0209 09:58:17.719269 1518 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033" containerName="mount-bpf-fs"
Feb 9 09:58:17.719269 kubelet[1518]: E0209 09:58:17.719281 1518 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033" containerName="clean-cilium-state"
Feb 9 09:58:17.719500 kubelet[1518]: E0209 09:58:17.719288 1518 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033" containerName="cilium-agent"
Feb 9 09:58:17.719500 kubelet[1518]: E0209 09:58:17.719297 1518 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033" containerName="mount-cgroup"
Feb 9 09:58:17.719500 kubelet[1518]: E0209 09:58:17.719302 1518 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033" containerName="apply-sysctl-overwrites"
Feb 9 09:58:17.719500 kubelet[1518]: I0209 09:58:17.719344 1518 memory_manager.go:346] "RemoveStaleState removing state" podUID="e99fe3b6-1bf7-4229-b2d3-1efb4d0b7033" containerName="cilium-agent"
Feb 9 09:58:17.737791 kubelet[1518]: I0209 09:58:17.737716 1518 topology_manager.go:210] "Topology Admit Handler"
Feb 9 09:58:17.818051 kubelet[1518]: I0209 09:58:17.818018 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4ddab2ea-9327-493a-8f90-1a7de5e50640-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-xbxht\" (UID: \"4ddab2ea-9327-493a-8f90-1a7de5e50640\") " pod="kube-system/cilium-operator-f59cbd8c6-xbxht"
Feb 9 09:58:17.818271 kubelet[1518]: I0209 09:58:17.818256 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-host-proc-sys-kernel\") pod \"cilium-ck2dh\" (UID: \"e561f672-ac0a-434f-8954-fdb4195729c9\") " pod="kube-system/cilium-ck2dh"
Feb 9 09:58:17.818434 kubelet[1518]: I0209 09:58:17.818395 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e561f672-ac0a-434f-8954-fdb4195729c9-hubble-tls\") pod \"cilium-ck2dh\" (UID: \"e561f672-ac0a-434f-8954-fdb4195729c9\") " pod="kube-system/cilium-ck2dh"
Feb 9 09:58:17.818483 kubelet[1518]: I0209 09:58:17.818444 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-xtables-lock\") pod \"cilium-ck2dh\" (UID: \"e561f672-ac0a-434f-8954-fdb4195729c9\") " pod="kube-system/cilium-ck2dh"
Feb 9 09:58:17.818483 kubelet[1518]: I0209 09:58:17.818475 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e561f672-ac0a-434f-8954-fdb4195729c9-cilium-ipsec-secrets\") pod \"cilium-ck2dh\" (UID: \"e561f672-ac0a-434f-8954-fdb4195729c9\") " pod="kube-system/cilium-ck2dh"
Feb 9 09:58:17.818537 kubelet[1518]: I0209 09:58:17.818501 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-host-proc-sys-net\") pod \"cilium-ck2dh\" (UID: \"e561f672-ac0a-434f-8954-fdb4195729c9\") " pod="kube-system/cilium-ck2dh"
Feb 9 09:58:17.818537 kubelet[1518]: I0209 09:58:17.818534 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9wvn\" (UniqueName: \"kubernetes.io/projected/4ddab2ea-9327-493a-8f90-1a7de5e50640-kube-api-access-x9wvn\") pod \"cilium-operator-f59cbd8c6-xbxht\" (UID: \"4ddab2ea-9327-493a-8f90-1a7de5e50640\") " pod="kube-system/cilium-operator-f59cbd8c6-xbxht"
Feb 9 09:58:17.818592 kubelet[1518]: I0209 09:58:17.818561 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-cni-path\") pod \"cilium-ck2dh\" (UID: \"e561f672-ac0a-434f-8954-fdb4195729c9\") " pod="kube-system/cilium-ck2dh"
Feb 9 09:58:17.818592 kubelet[1518]: I0209 09:58:17.818581 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-lib-modules\") pod \"cilium-ck2dh\" (UID: \"e561f672-ac0a-434f-8954-fdb4195729c9\") " pod="kube-system/cilium-ck2dh"
Feb 9 09:58:17.818647 kubelet[1518]: I0209 09:58:17.818611 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27x6k\" (UniqueName: \"kubernetes.io/projected/e561f672-ac0a-434f-8954-fdb4195729c9-kube-api-access-27x6k\") pod \"cilium-ck2dh\" (UID: \"e561f672-ac0a-434f-8954-fdb4195729c9\") " pod="kube-system/cilium-ck2dh"
Feb 9 09:58:17.818676 kubelet[1518]: I0209 09:58:17.818646 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-cilium-run\") pod \"cilium-ck2dh\" (UID: \"e561f672-ac0a-434f-8954-fdb4195729c9\") " pod="kube-system/cilium-ck2dh"
Feb 9 09:58:17.818676 kubelet[1518]: I0209 09:58:17.818667 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-bpf-maps\") pod \"cilium-ck2dh\" (UID: \"e561f672-ac0a-434f-8954-fdb4195729c9\") " pod="kube-system/cilium-ck2dh"
Feb 9 09:58:17.818723 kubelet[1518]: I0209 09:58:17.818690 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-hostproc\") pod \"cilium-ck2dh\" (UID: \"e561f672-ac0a-434f-8954-fdb4195729c9\") " pod="kube-system/cilium-ck2dh"
Feb 9 09:58:17.818723 kubelet[1518]: I0209 09:58:17.818714 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e561f672-ac0a-434f-8954-fdb4195729c9-clustermesh-secrets\") pod \"cilium-ck2dh\" (UID: \"e561f672-ac0a-434f-8954-fdb4195729c9\") " pod="kube-system/cilium-ck2dh"
Feb 9 09:58:17.818769 kubelet[1518]: I0209 09:58:17.818752 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-cilium-cgroup\") pod \"cilium-ck2dh\" (UID: \"e561f672-ac0a-434f-8954-fdb4195729c9\") " pod="kube-system/cilium-ck2dh"
Feb 9 09:58:17.818794 kubelet[1518]: I0209 09:58:17.818772 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-etc-cni-netd\") pod \"cilium-ck2dh\" (UID: \"e561f672-ac0a-434f-8954-fdb4195729c9\") " pod="kube-system/cilium-ck2dh"
Feb 9 09:58:17.818820 kubelet[1518]: I0209 09:58:17.818807 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e561f672-ac0a-434f-8954-fdb4195729c9-cilium-config-path\") pod \"cilium-ck2dh\" (UID: \"e561f672-ac0a-434f-8954-fdb4195729c9\") " pod="kube-system/cilium-ck2dh"
Feb 9 09:58:17.900453 kubelet[1518]: E0209 09:58:17.900389 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:58:18.022561 kubelet[1518]: E0209 09:58:18.022459 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:58:18.024084 env[1216]: time="2024-02-09T09:58:18.024043806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ck2dh,Uid:e561f672-ac0a-434f-8954-fdb4195729c9,Namespace:kube-system,Attempt:0,}"
Feb 9 09:58:18.035305 env[1216]: time="2024-02-09T09:58:18.035238821Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 09:58:18.035467 env[1216]: time="2024-02-09T09:58:18.035443739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 09:58:18.035594 env[1216]: time="2024-02-09T09:58:18.035563499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 09:58:18.035966 env[1216]: time="2024-02-09T09:58:18.035930337Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f6fdd05bacdf1398bd12ca953ad151bf0c29626224b9555731728ab970146ef7 pid=3189 runtime=io.containerd.runc.v2
Feb 9 09:58:18.040972 kubelet[1518]: E0209 09:58:18.040902 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:58:18.041813 env[1216]: time="2024-02-09T09:58:18.041774423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-xbxht,Uid:4ddab2ea-9327-493a-8f90-1a7de5e50640,Namespace:kube-system,Attempt:0,}"
Feb 9 09:58:18.060342 env[1216]: time="2024-02-09T09:58:18.059393480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 09:58:18.060342 env[1216]: time="2024-02-09T09:58:18.059446600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 09:58:18.060342 env[1216]: time="2024-02-09T09:58:18.059457000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 09:58:18.060342 env[1216]: time="2024-02-09T09:58:18.059615999Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e096ab9ef92dc6de8415cf3b33a307769567e9b43b6bef8d4a50413eafc7e92 pid=3218 runtime=io.containerd.runc.v2
Feb 9 09:58:18.086691 env[1216]: time="2024-02-09T09:58:18.086647081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ck2dh,Uid:e561f672-ac0a-434f-8954-fdb4195729c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6fdd05bacdf1398bd12ca953ad151bf0c29626224b9555731728ab970146ef7\""
Feb 9 09:58:18.087895 kubelet[1518]: E0209 09:58:18.087430 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:58:18.089696 env[1216]: time="2024-02-09T09:58:18.089533625Z" level=info msg="CreateContainer within sandbox \"f6fdd05bacdf1398bd12ca953ad151bf0c29626224b9555731728ab970146ef7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 09:58:18.101812 env[1216]: time="2024-02-09T09:58:18.101765913Z" level=info msg="CreateContainer within sandbox \"f6fdd05bacdf1398bd12ca953ad151bf0c29626224b9555731728ab970146ef7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b7bf01e1756431b0be74c9eef5cc45ba806912eb21d100790c4d77d2c17e2d00\""
Feb 9 09:58:18.102664 env[1216]: time="2024-02-09T09:58:18.102625628Z" level=info msg="StartContainer for \"b7bf01e1756431b0be74c9eef5cc45ba806912eb21d100790c4d77d2c17e2d00\""
Feb 9 09:58:18.115864 env[1216]: time="2024-02-09T09:58:18.115822032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-xbxht,Uid:4ddab2ea-9327-493a-8f90-1a7de5e50640,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e096ab9ef92dc6de8415cf3b33a307769567e9b43b6bef8d4a50413eafc7e92\""
Feb 9 09:58:18.116709 kubelet[1518]: E0209 09:58:18.116686 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:58:18.117557 env[1216]: time="2024-02-09T09:58:18.117527422Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 9 09:58:18.159188 env[1216]: time="2024-02-09T09:58:18.159141139Z" level=info msg="StartContainer for \"b7bf01e1756431b0be74c9eef5cc45ba806912eb21d100790c4d77d2c17e2d00\" returns successfully"
Feb 9 09:58:18.191508 env[1216]: time="2024-02-09T09:58:18.191464231Z" level=info msg="shim disconnected" id=b7bf01e1756431b0be74c9eef5cc45ba806912eb21d100790c4d77d2c17e2d00
Feb 9 09:58:18.191857 env[1216]: time="2024-02-09T09:58:18.191837269Z" level=warning msg="cleaning up after shim disconnected" id=b7bf01e1756431b0be74c9eef5cc45ba806912eb21d100790c4d77d2c17e2d00 namespace=k8s.io
Feb 9 09:58:18.191962 env[1216]: time="2024-02-09T09:58:18.191947308Z" level=info msg="cleaning up dead shim"
Feb 9 09:58:18.199713 env[1216]: time="2024-02-09T09:58:18.199675423Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:58:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3313 runtime=io.containerd.runc.v2\n"
Feb 9 09:58:18.901121 kubelet[1518]: E0209 09:58:18.901070 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:58:18.944869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount806801330.mount: Deactivated successfully.
Feb 9 09:58:19.145945 env[1216]: time="2024-02-09T09:58:19.145908291Z" level=info msg="StopPodSandbox for \"f6fdd05bacdf1398bd12ca953ad151bf0c29626224b9555731728ab970146ef7\""
Feb 9 09:58:19.146372 env[1216]: time="2024-02-09T09:58:19.146315368Z" level=info msg="Container to stop \"b7bf01e1756431b0be74c9eef5cc45ba806912eb21d100790c4d77d2c17e2d00\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 09:58:19.148090 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f6fdd05bacdf1398bd12ca953ad151bf0c29626224b9555731728ab970146ef7-shm.mount: Deactivated successfully.
Feb 9 09:58:19.168457 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6fdd05bacdf1398bd12ca953ad151bf0c29626224b9555731728ab970146ef7-rootfs.mount: Deactivated successfully.
Feb 9 09:58:19.199640 env[1216]: time="2024-02-09T09:58:19.199584184Z" level=info msg="shim disconnected" id=f6fdd05bacdf1398bd12ca953ad151bf0c29626224b9555731728ab970146ef7
Feb 9 09:58:19.199866 env[1216]: time="2024-02-09T09:58:19.199847102Z" level=warning msg="cleaning up after shim disconnected" id=f6fdd05bacdf1398bd12ca953ad151bf0c29626224b9555731728ab970146ef7 namespace=k8s.io
Feb 9 09:58:19.199924 env[1216]: time="2024-02-09T09:58:19.199912262Z" level=info msg="cleaning up dead shim"
Feb 9 09:58:19.206647 env[1216]: time="2024-02-09T09:58:19.206604544Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:58:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3346 runtime=io.containerd.runc.v2\n"
Feb 9 09:58:19.207030 env[1216]: time="2024-02-09T09:58:19.207005261Z" level=info msg="TearDown network for sandbox \"f6fdd05bacdf1398bd12ca953ad151bf0c29626224b9555731728ab970146ef7\" successfully"
Feb 9 09:58:19.207132 env[1216]: time="2024-02-09T09:58:19.207115141Z" level=info msg="StopPodSandbox for \"f6fdd05bacdf1398bd12ca953ad151bf0c29626224b9555731728ab970146ef7\" returns successfully"
Feb 9 09:58:19.326115 kubelet[1518]: I0209 09:58:19.326067 1518 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-xtables-lock\") pod \"e561f672-ac0a-434f-8954-fdb4195729c9\" (UID: \"e561f672-ac0a-434f-8954-fdb4195729c9\") "
Feb 9 09:58:19.326115 kubelet[1518]: I0209 09:58:19.326121 1518 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e561f672-ac0a-434f-8954-fdb4195729c9-clustermesh-secrets\") pod \"e561f672-ac0a-434f-8954-fdb4195729c9\" (UID: \"e561f672-ac0a-434f-8954-fdb4195729c9\") "
Feb 9 09:58:19.326307 kubelet[1518]: I0209 09:58:19.326144 1518 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-host-proc-sys-kernel\") pod \"e561f672-ac0a-434f-8954-fdb4195729c9\" (UID: \"e561f672-ac0a-434f-8954-fdb4195729c9\") "
Feb 9 09:58:19.326307 kubelet[1518]: I0209 09:58:19.326166 1518 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e561f672-ac0a-434f-8954-fdb4195729c9-hubble-tls\") pod \"e561f672-ac0a-434f-8954-fdb4195729c9\" (UID: \"e561f672-ac0a-434f-8954-fdb4195729c9\") "
Feb 9 09:58:19.326307 kubelet[1518]: I0209 09:58:19.326191 1518 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e561f672-ac0a-434f-8954-fdb4195729c9-cilium-ipsec-secrets\") pod \"e561f672-ac0a-434f-8954-fdb4195729c9\" (UID: \"e561f672-ac0a-434f-8954-fdb4195729c9\") "
Feb 9 09:58:19.326307 kubelet[1518]: I0209 09:58:19.326207 1518 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-bpf-maps\") pod \"e561f672-ac0a-434f-8954-fdb4195729c9\" (UID: \"e561f672-ac0a-434f-8954-fdb4195729c9\") "
Feb 9 09:58:19.326307 kubelet[1518]: I0209 09:58:19.326226 1518 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-cni-path\") pod \"e561f672-ac0a-434f-8954-fdb4195729c9\" (UID: \"e561f672-ac0a-434f-8954-fdb4195729c9\") "
Feb 9 09:58:19.326307 kubelet[1518]: I0209 09:58:19.326246 1518 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e561f672-ac0a-434f-8954-fdb4195729c9-cilium-config-path\") pod \"e561f672-ac0a-434f-8954-fdb4195729c9\" (UID: \"e561f672-ac0a-434f-8954-fdb4195729c9\") "
Feb 9 09:58:19.328823 kubelet[1518]: I0209 09:58:19.326263 1518 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-cilium-run\") pod \"e561f672-ac0a-434f-8954-fdb4195729c9\" (UID: \"e561f672-ac0a-434f-8954-fdb4195729c9\") "
Feb 9 09:58:19.328823 kubelet[1518]: I0209 09:58:19.326280 1518 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-cilium-cgroup\") pod \"e561f672-ac0a-434f-8954-fdb4195729c9\" (UID: \"e561f672-ac0a-434f-8954-fdb4195729c9\") "
Feb 9 09:58:19.328823 kubelet[1518]: I0209 09:58:19.326299 1518 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-etc-cni-netd\") pod \"e561f672-ac0a-434f-8954-fdb4195729c9\" (UID: \"e561f672-ac0a-434f-8954-fdb4195729c9\") "
Feb 9 09:58:19.328823 kubelet[1518]: I0209 09:58:19.326317 1518 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-host-proc-sys-net\") pod \"e561f672-ac0a-434f-8954-fdb4195729c9\" (UID: \"e561f672-ac0a-434f-8954-fdb4195729c9\") "
Feb 9 09:58:19.328823 kubelet[1518]: I0209 09:58:19.326357 1518 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-lib-modules\") pod \"e561f672-ac0a-434f-8954-fdb4195729c9\" (UID: \"e561f672-ac0a-434f-8954-fdb4195729c9\") "
Feb 9 09:58:19.328823 kubelet[1518]: I0209 09:58:19.326378 1518 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-hostproc\") pod \"e561f672-ac0a-434f-8954-fdb4195729c9\" (UID: \"e561f672-ac0a-434f-8954-fdb4195729c9\") "
Feb 9 09:58:19.328969 kubelet[1518]: I0209 09:58:19.326397 1518 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27x6k\" (UniqueName: \"kubernetes.io/projected/e561f672-ac0a-434f-8954-fdb4195729c9-kube-api-access-27x6k\") pod \"e561f672-ac0a-434f-8954-fdb4195729c9\" (UID: \"e561f672-ac0a-434f-8954-fdb4195729c9\") "
Feb 9 09:58:19.328969 kubelet[1518]: I0209 09:58:19.326402 1518 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e561f672-ac0a-434f-8954-fdb4195729c9" (UID: "e561f672-ac0a-434f-8954-fdb4195729c9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:58:19.328969 kubelet[1518]: I0209 09:58:19.326445 1518 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e561f672-ac0a-434f-8954-fdb4195729c9" (UID: "e561f672-ac0a-434f-8954-fdb4195729c9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:58:19.328969 kubelet[1518]: I0209 09:58:19.326707 1518 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e561f672-ac0a-434f-8954-fdb4195729c9" (UID: "e561f672-ac0a-434f-8954-fdb4195729c9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:58:19.328969 kubelet[1518]: I0209 09:58:19.326711 1518 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e561f672-ac0a-434f-8954-fdb4195729c9" (UID: "e561f672-ac0a-434f-8954-fdb4195729c9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:58:19.329090 kubelet[1518]: I0209 09:58:19.326739 1518 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e561f672-ac0a-434f-8954-fdb4195729c9" (UID: "e561f672-ac0a-434f-8954-fdb4195729c9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:58:19.329090 kubelet[1518]: I0209 09:58:19.326758 1518 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e561f672-ac0a-434f-8954-fdb4195729c9" (UID: "e561f672-ac0a-434f-8954-fdb4195729c9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:58:19.329090 kubelet[1518]: I0209 09:58:19.326762 1518 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-cni-path" (OuterVolumeSpecName: "cni-path") pod "e561f672-ac0a-434f-8954-fdb4195729c9" (UID: "e561f672-ac0a-434f-8954-fdb4195729c9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:58:19.329090 kubelet[1518]: I0209 09:58:19.326776 1518 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-hostproc" (OuterVolumeSpecName: "hostproc") pod "e561f672-ac0a-434f-8954-fdb4195729c9" (UID: "e561f672-ac0a-434f-8954-fdb4195729c9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:58:19.329090 kubelet[1518]: I0209 09:58:19.326795 1518 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e561f672-ac0a-434f-8954-fdb4195729c9" (UID: "e561f672-ac0a-434f-8954-fdb4195729c9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:58:19.329200 kubelet[1518]: W0209 09:58:19.326892 1518 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/e561f672-ac0a-434f-8954-fdb4195729c9/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 9 09:58:19.329200 kubelet[1518]: I0209 09:58:19.327178 1518 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e561f672-ac0a-434f-8954-fdb4195729c9" (UID: "e561f672-ac0a-434f-8954-fdb4195729c9"). InnerVolumeSpecName "host-proc-sys-kernel".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:58:19.329200 kubelet[1518]: I0209 09:58:19.328803 1518 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e561f672-ac0a-434f-8954-fdb4195729c9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e561f672-ac0a-434f-8954-fdb4195729c9" (UID: "e561f672-ac0a-434f-8954-fdb4195729c9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 09:58:19.329761 kubelet[1518]: I0209 09:58:19.329732 1518 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e561f672-ac0a-434f-8954-fdb4195729c9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e561f672-ac0a-434f-8954-fdb4195729c9" (UID: "e561f672-ac0a-434f-8954-fdb4195729c9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:58:19.329968 kubelet[1518]: I0209 09:58:19.329922 1518 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e561f672-ac0a-434f-8954-fdb4195729c9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e561f672-ac0a-434f-8954-fdb4195729c9" (UID: "e561f672-ac0a-434f-8954-fdb4195729c9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:58:19.331383 kubelet[1518]: I0209 09:58:19.331357 1518 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e561f672-ac0a-434f-8954-fdb4195729c9-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "e561f672-ac0a-434f-8954-fdb4195729c9" (UID: "e561f672-ac0a-434f-8954-fdb4195729c9"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:58:19.333253 kubelet[1518]: I0209 09:58:19.333216 1518 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e561f672-ac0a-434f-8954-fdb4195729c9-kube-api-access-27x6k" (OuterVolumeSpecName: "kube-api-access-27x6k") pod "e561f672-ac0a-434f-8954-fdb4195729c9" (UID: "e561f672-ac0a-434f-8954-fdb4195729c9"). InnerVolumeSpecName "kube-api-access-27x6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:58:19.406093 env[1216]: time="2024-02-09T09:58:19.406037804Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:19.407232 env[1216]: time="2024-02-09T09:58:19.407202477Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:19.408986 env[1216]: time="2024-02-09T09:58:19.408948867Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:19.409434 env[1216]: time="2024-02-09T09:58:19.409411785Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 9 09:58:19.410971 env[1216]: time="2024-02-09T09:58:19.410936176Z" level=info msg="CreateContainer within sandbox \"7e096ab9ef92dc6de8415cf3b33a307769567e9b43b6bef8d4a50413eafc7e92\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 09:58:19.417486 
env[1216]: time="2024-02-09T09:58:19.417452219Z" level=info msg="CreateContainer within sandbox \"7e096ab9ef92dc6de8415cf3b33a307769567e9b43b6bef8d4a50413eafc7e92\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6829521406d89c235fd52c40b4c854c8b3aece5129eb842692eda085e7c9bdc2\"" Feb 9 09:58:19.417957 env[1216]: time="2024-02-09T09:58:19.417923216Z" level=info msg="StartContainer for \"6829521406d89c235fd52c40b4c854c8b3aece5129eb842692eda085e7c9bdc2\"" Feb 9 09:58:19.426994 kubelet[1518]: I0209 09:58:19.426897 1518 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-cilium-run\") on node \"10.0.0.81\" DevicePath \"\"" Feb 9 09:58:19.426994 kubelet[1518]: I0209 09:58:19.426930 1518 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-cilium-cgroup\") on node \"10.0.0.81\" DevicePath \"\"" Feb 9 09:58:19.426994 kubelet[1518]: I0209 09:58:19.426941 1518 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-etc-cni-netd\") on node \"10.0.0.81\" DevicePath \"\"" Feb 9 09:58:19.426994 kubelet[1518]: I0209 09:58:19.426950 1518 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e561f672-ac0a-434f-8954-fdb4195729c9-cilium-config-path\") on node \"10.0.0.81\" DevicePath \"\"" Feb 9 09:58:19.426994 kubelet[1518]: I0209 09:58:19.426962 1518 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-lib-modules\") on node \"10.0.0.81\" DevicePath \"\"" Feb 9 09:58:19.426994 kubelet[1518]: I0209 09:58:19.426971 1518 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-hostproc\") on node \"10.0.0.81\" DevicePath \"\"" Feb 9 09:58:19.426994 kubelet[1518]: I0209 09:58:19.426981 1518 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-27x6k\" (UniqueName: \"kubernetes.io/projected/e561f672-ac0a-434f-8954-fdb4195729c9-kube-api-access-27x6k\") on node \"10.0.0.81\" DevicePath \"\"" Feb 9 09:58:19.427293 kubelet[1518]: I0209 09:58:19.427011 1518 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-host-proc-sys-net\") on node \"10.0.0.81\" DevicePath \"\"" Feb 9 09:58:19.427293 kubelet[1518]: I0209 09:58:19.427020 1518 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-host-proc-sys-kernel\") on node \"10.0.0.81\" DevicePath \"\"" Feb 9 09:58:19.427293 kubelet[1518]: I0209 09:58:19.427029 1518 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e561f672-ac0a-434f-8954-fdb4195729c9-hubble-tls\") on node \"10.0.0.81\" DevicePath \"\"" Feb 9 09:58:19.427293 kubelet[1518]: I0209 09:58:19.427038 1518 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-xtables-lock\") on node \"10.0.0.81\" DevicePath \"\"" Feb 9 09:58:19.427293 kubelet[1518]: I0209 09:58:19.427046 1518 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e561f672-ac0a-434f-8954-fdb4195729c9-clustermesh-secrets\") on node \"10.0.0.81\" DevicePath \"\"" Feb 9 09:58:19.427293 kubelet[1518]: I0209 09:58:19.427055 1518 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-bpf-maps\") on node 
\"10.0.0.81\" DevicePath \"\"" Feb 9 09:58:19.427293 kubelet[1518]: I0209 09:58:19.427065 1518 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e561f672-ac0a-434f-8954-fdb4195729c9-cilium-ipsec-secrets\") on node \"10.0.0.81\" DevicePath \"\"" Feb 9 09:58:19.427293 kubelet[1518]: I0209 09:58:19.427074 1518 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e561f672-ac0a-434f-8954-fdb4195729c9-cni-path\") on node \"10.0.0.81\" DevicePath \"\"" Feb 9 09:58:19.495504 env[1216]: time="2024-02-09T09:58:19.495459173Z" level=info msg="StartContainer for \"6829521406d89c235fd52c40b4c854c8b3aece5129eb842692eda085e7c9bdc2\" returns successfully" Feb 9 09:58:19.835742 kubelet[1518]: I0209 09:58:19.835719 1518 setters.go:548] "Node became not ready" node="10.0.0.81" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 09:58:19.835665748 +0000 UTC m=+74.873867980 LastTransitionTime:2024-02-09 09:58:19.835665748 +0000 UTC m=+74.873867980 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 09:58:19.901945 kubelet[1518]: E0209 09:58:19.901898 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:19.925351 systemd[1]: var-lib-kubelet-pods-e561f672\x2dac0a\x2d434f\x2d8954\x2dfdb4195729c9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d27x6k.mount: Deactivated successfully. Feb 9 09:58:19.925502 systemd[1]: var-lib-kubelet-pods-e561f672\x2dac0a\x2d434f\x2d8954\x2dfdb4195729c9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 9 09:58:19.925629 systemd[1]: var-lib-kubelet-pods-e561f672\x2dac0a\x2d434f\x2d8954\x2dfdb4195729c9-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 09:58:19.925722 systemd[1]: var-lib-kubelet-pods-e561f672\x2dac0a\x2d434f\x2d8954\x2dfdb4195729c9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 09:58:20.149297 kubelet[1518]: E0209 09:58:20.149206 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:58:20.151371 kubelet[1518]: I0209 09:58:20.151065 1518 scope.go:115] "RemoveContainer" containerID="b7bf01e1756431b0be74c9eef5cc45ba806912eb21d100790c4d77d2c17e2d00" Feb 9 09:58:20.152565 env[1216]: time="2024-02-09T09:58:20.152517952Z" level=info msg="RemoveContainer for \"b7bf01e1756431b0be74c9eef5cc45ba806912eb21d100790c4d77d2c17e2d00\"" Feb 9 09:58:20.156530 env[1216]: time="2024-02-09T09:58:20.156380291Z" level=info msg="RemoveContainer for \"b7bf01e1756431b0be74c9eef5cc45ba806912eb21d100790c4d77d2c17e2d00\" returns successfully" Feb 9 09:58:20.156644 kubelet[1518]: I0209 09:58:20.156421 1518 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-xbxht" podStartSLOduration=-9.223372033698397e+09 pod.CreationTimestamp="2024-02-09 09:58:17 +0000 UTC" firstStartedPulling="2024-02-09 09:58:18.117263823 +0000 UTC m=+73.155466055" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:58:20.156199572 +0000 UTC m=+75.194401764" watchObservedRunningTime="2024-02-09 09:58:20.156380011 +0000 UTC m=+75.194582243" Feb 9 09:58:20.178529 kubelet[1518]: I0209 09:58:20.178464 1518 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:58:20.178529 kubelet[1518]: E0209 09:58:20.178521 1518 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="e561f672-ac0a-434f-8954-fdb4195729c9" containerName="mount-cgroup" Feb 9 09:58:20.178529 kubelet[1518]: I0209 09:58:20.178545 1518 memory_manager.go:346] "RemoveStaleState removing state" podUID="e561f672-ac0a-434f-8954-fdb4195729c9" containerName="mount-cgroup" Feb 9 09:58:20.333672 kubelet[1518]: I0209 09:58:20.333608 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/961e8b67-8ff6-4a2c-a235-776e36fa0dff-cni-path\") pod \"cilium-rsws5\" (UID: \"961e8b67-8ff6-4a2c-a235-776e36fa0dff\") " pod="kube-system/cilium-rsws5" Feb 9 09:58:20.333672 kubelet[1518]: I0209 09:58:20.333674 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/961e8b67-8ff6-4a2c-a235-776e36fa0dff-clustermesh-secrets\") pod \"cilium-rsws5\" (UID: \"961e8b67-8ff6-4a2c-a235-776e36fa0dff\") " pod="kube-system/cilium-rsws5" Feb 9 09:58:20.333842 kubelet[1518]: I0209 09:58:20.333698 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/961e8b67-8ff6-4a2c-a235-776e36fa0dff-cilium-cgroup\") pod \"cilium-rsws5\" (UID: \"961e8b67-8ff6-4a2c-a235-776e36fa0dff\") " pod="kube-system/cilium-rsws5" Feb 9 09:58:20.333842 kubelet[1518]: I0209 09:58:20.333719 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/961e8b67-8ff6-4a2c-a235-776e36fa0dff-hostproc\") pod \"cilium-rsws5\" (UID: \"961e8b67-8ff6-4a2c-a235-776e36fa0dff\") " pod="kube-system/cilium-rsws5" Feb 9 09:58:20.333842 kubelet[1518]: I0209 09:58:20.333741 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/961e8b67-8ff6-4a2c-a235-776e36fa0dff-cilium-ipsec-secrets\") pod \"cilium-rsws5\" (UID: \"961e8b67-8ff6-4a2c-a235-776e36fa0dff\") " pod="kube-system/cilium-rsws5" Feb 9 09:58:20.333842 kubelet[1518]: I0209 09:58:20.333762 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/961e8b67-8ff6-4a2c-a235-776e36fa0dff-host-proc-sys-kernel\") pod \"cilium-rsws5\" (UID: \"961e8b67-8ff6-4a2c-a235-776e36fa0dff\") " pod="kube-system/cilium-rsws5" Feb 9 09:58:20.333842 kubelet[1518]: I0209 09:58:20.333783 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/961e8b67-8ff6-4a2c-a235-776e36fa0dff-cilium-run\") pod \"cilium-rsws5\" (UID: \"961e8b67-8ff6-4a2c-a235-776e36fa0dff\") " pod="kube-system/cilium-rsws5" Feb 9 09:58:20.333842 kubelet[1518]: I0209 09:58:20.333803 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/961e8b67-8ff6-4a2c-a235-776e36fa0dff-xtables-lock\") pod \"cilium-rsws5\" (UID: \"961e8b67-8ff6-4a2c-a235-776e36fa0dff\") " pod="kube-system/cilium-rsws5" Feb 9 09:58:20.333999 kubelet[1518]: I0209 09:58:20.333832 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/961e8b67-8ff6-4a2c-a235-776e36fa0dff-bpf-maps\") pod \"cilium-rsws5\" (UID: \"961e8b67-8ff6-4a2c-a235-776e36fa0dff\") " pod="kube-system/cilium-rsws5" Feb 9 09:58:20.333999 kubelet[1518]: I0209 09:58:20.333853 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/961e8b67-8ff6-4a2c-a235-776e36fa0dff-etc-cni-netd\") pod \"cilium-rsws5\" (UID: 
\"961e8b67-8ff6-4a2c-a235-776e36fa0dff\") " pod="kube-system/cilium-rsws5" Feb 9 09:58:20.333999 kubelet[1518]: I0209 09:58:20.333874 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/961e8b67-8ff6-4a2c-a235-776e36fa0dff-lib-modules\") pod \"cilium-rsws5\" (UID: \"961e8b67-8ff6-4a2c-a235-776e36fa0dff\") " pod="kube-system/cilium-rsws5" Feb 9 09:58:20.333999 kubelet[1518]: I0209 09:58:20.333894 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/961e8b67-8ff6-4a2c-a235-776e36fa0dff-cilium-config-path\") pod \"cilium-rsws5\" (UID: \"961e8b67-8ff6-4a2c-a235-776e36fa0dff\") " pod="kube-system/cilium-rsws5" Feb 9 09:58:20.333999 kubelet[1518]: I0209 09:58:20.333919 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/961e8b67-8ff6-4a2c-a235-776e36fa0dff-host-proc-sys-net\") pod \"cilium-rsws5\" (UID: \"961e8b67-8ff6-4a2c-a235-776e36fa0dff\") " pod="kube-system/cilium-rsws5" Feb 9 09:58:20.333999 kubelet[1518]: I0209 09:58:20.333939 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/961e8b67-8ff6-4a2c-a235-776e36fa0dff-hubble-tls\") pod \"cilium-rsws5\" (UID: \"961e8b67-8ff6-4a2c-a235-776e36fa0dff\") " pod="kube-system/cilium-rsws5" Feb 9 09:58:20.334138 kubelet[1518]: I0209 09:58:20.333959 1518 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjx5c\" (UniqueName: \"kubernetes.io/projected/961e8b67-8ff6-4a2c-a235-776e36fa0dff-kube-api-access-fjx5c\") pod \"cilium-rsws5\" (UID: \"961e8b67-8ff6-4a2c-a235-776e36fa0dff\") " pod="kube-system/cilium-rsws5" Feb 9 09:58:20.481848 kubelet[1518]: 
E0209 09:58:20.481741 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:58:20.482666 env[1216]: time="2024-02-09T09:58:20.482261941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rsws5,Uid:961e8b67-8ff6-4a2c-a235-776e36fa0dff,Namespace:kube-system,Attempt:0,}" Feb 9 09:58:20.495843 env[1216]: time="2024-02-09T09:58:20.495768985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:58:20.495843 env[1216]: time="2024-02-09T09:58:20.495813304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:58:20.495843 env[1216]: time="2024-02-09T09:58:20.495823544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:58:20.496020 env[1216]: time="2024-02-09T09:58:20.495953984Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ead866450e17efbea851049b3b09c42c9a197148857a8217d0d7fc352899c002 pid=3412 runtime=io.containerd.runc.v2 Feb 9 09:58:20.535309 env[1216]: time="2024-02-09T09:58:20.535267403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rsws5,Uid:961e8b67-8ff6-4a2c-a235-776e36fa0dff,Namespace:kube-system,Attempt:0,} returns sandbox id \"ead866450e17efbea851049b3b09c42c9a197148857a8217d0d7fc352899c002\"" Feb 9 09:58:20.536345 kubelet[1518]: E0209 09:58:20.536041 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:58:20.538084 env[1216]: time="2024-02-09T09:58:20.538050107Z" level=info msg="CreateContainer within sandbox 
\"ead866450e17efbea851049b3b09c42c9a197148857a8217d0d7fc352899c002\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 09:58:20.548187 env[1216]: time="2024-02-09T09:58:20.548146451Z" level=info msg="CreateContainer within sandbox \"ead866450e17efbea851049b3b09c42c9a197148857a8217d0d7fc352899c002\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5be3610d72c053d4c5e428c3a49e3ddce1759038656a72181d71b8001e040a09\"" Feb 9 09:58:20.548674 env[1216]: time="2024-02-09T09:58:20.548583328Z" level=info msg="StartContainer for \"5be3610d72c053d4c5e428c3a49e3ddce1759038656a72181d71b8001e040a09\"" Feb 9 09:58:20.599309 env[1216]: time="2024-02-09T09:58:20.599224884Z" level=info msg="StartContainer for \"5be3610d72c053d4c5e428c3a49e3ddce1759038656a72181d71b8001e040a09\" returns successfully" Feb 9 09:58:20.620682 env[1216]: time="2024-02-09T09:58:20.620628724Z" level=info msg="shim disconnected" id=5be3610d72c053d4c5e428c3a49e3ddce1759038656a72181d71b8001e040a09 Feb 9 09:58:20.620682 env[1216]: time="2024-02-09T09:58:20.620677283Z" level=warning msg="cleaning up after shim disconnected" id=5be3610d72c053d4c5e428c3a49e3ddce1759038656a72181d71b8001e040a09 namespace=k8s.io Feb 9 09:58:20.620682 env[1216]: time="2024-02-09T09:58:20.620687963Z" level=info msg="cleaning up dead shim" Feb 9 09:58:20.628133 env[1216]: time="2024-02-09T09:58:20.628079242Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:58:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3494 runtime=io.containerd.runc.v2\n" Feb 9 09:58:20.902366 kubelet[1518]: E0209 09:58:20.902313 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:20.917506 kubelet[1518]: E0209 09:58:20.917475 1518 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 
9 09:58:21.155079 kubelet[1518]: E0209 09:58:21.154983 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:58:21.155561 kubelet[1518]: E0209 09:58:21.155524 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:58:21.157444 env[1216]: time="2024-02-09T09:58:21.157406683Z" level=info msg="CreateContainer within sandbox \"ead866450e17efbea851049b3b09c42c9a197148857a8217d0d7fc352899c002\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 09:58:21.168203 env[1216]: time="2024-02-09T09:58:21.168149144Z" level=info msg="CreateContainer within sandbox \"ead866450e17efbea851049b3b09c42c9a197148857a8217d0d7fc352899c002\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4d937415dfccdd95ed59f289032f5bd9f36a277b2e5afd2eca0c71c576f63681\"" Feb 9 09:58:21.169122 env[1216]: time="2024-02-09T09:58:21.169060459Z" level=info msg="StartContainer for \"4d937415dfccdd95ed59f289032f5bd9f36a277b2e5afd2eca0c71c576f63681\"" Feb 9 09:58:21.221834 env[1216]: time="2024-02-09T09:58:21.221717368Z" level=info msg="StartContainer for \"4d937415dfccdd95ed59f289032f5bd9f36a277b2e5afd2eca0c71c576f63681\" returns successfully" Feb 9 09:58:21.242658 env[1216]: time="2024-02-09T09:58:21.242602093Z" level=info msg="shim disconnected" id=4d937415dfccdd95ed59f289032f5bd9f36a277b2e5afd2eca0c71c576f63681 Feb 9 09:58:21.242852 env[1216]: time="2024-02-09T09:58:21.242667132Z" level=warning msg="cleaning up after shim disconnected" id=4d937415dfccdd95ed59f289032f5bd9f36a277b2e5afd2eca0c71c576f63681 namespace=k8s.io Feb 9 09:58:21.242852 env[1216]: time="2024-02-09T09:58:21.242677492Z" level=info msg="cleaning up dead shim" Feb 9 09:58:21.249718 env[1216]: 
time="2024-02-09T09:58:21.249676974Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:58:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3556 runtime=io.containerd.runc.v2\n" Feb 9 09:58:21.902461 kubelet[1518]: E0209 09:58:21.902406 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:21.924760 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d937415dfccdd95ed59f289032f5bd9f36a277b2e5afd2eca0c71c576f63681-rootfs.mount: Deactivated successfully. Feb 9 09:58:22.012999 kubelet[1518]: I0209 09:58:22.012958 1518 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=e561f672-ac0a-434f-8954-fdb4195729c9 path="/var/lib/kubelet/pods/e561f672-ac0a-434f-8954-fdb4195729c9/volumes" Feb 9 09:58:22.158076 kubelet[1518]: E0209 09:58:22.157974 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:58:22.160072 env[1216]: time="2024-02-09T09:58:22.160035559Z" level=info msg="CreateContainer within sandbox \"ead866450e17efbea851049b3b09c42c9a197148857a8217d0d7fc352899c002\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 09:58:22.170720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3266981159.mount: Deactivated successfully. 
Feb 9 09:58:22.171780 env[1216]: time="2024-02-09T09:58:22.171744095Z" level=info msg="CreateContainer within sandbox \"ead866450e17efbea851049b3b09c42c9a197148857a8217d0d7fc352899c002\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"79c5c78b944bb9ab8dace49d1321322b66cabe2be58bcde4d58f2e3d950023a5\""
Feb 9 09:58:22.172197 env[1216]: time="2024-02-09T09:58:22.172166533Z" level=info msg="StartContainer for \"79c5c78b944bb9ab8dace49d1321322b66cabe2be58bcde4d58f2e3d950023a5\""
Feb 9 09:58:22.223722 env[1216]: time="2024-02-09T09:58:22.223676813Z" level=info msg="StartContainer for \"79c5c78b944bb9ab8dace49d1321322b66cabe2be58bcde4d58f2e3d950023a5\" returns successfully"
Feb 9 09:58:22.244520 env[1216]: time="2024-02-09T09:58:22.244465380Z" level=info msg="shim disconnected" id=79c5c78b944bb9ab8dace49d1321322b66cabe2be58bcde4d58f2e3d950023a5
Feb 9 09:58:22.244771 env[1216]: time="2024-02-09T09:58:22.244751459Z" level=warning msg="cleaning up after shim disconnected" id=79c5c78b944bb9ab8dace49d1321322b66cabe2be58bcde4d58f2e3d950023a5 namespace=k8s.io
Feb 9 09:58:22.244863 env[1216]: time="2024-02-09T09:58:22.244848618Z" level=info msg="cleaning up dead shim"
Feb 9 09:58:22.252224 env[1216]: time="2024-02-09T09:58:22.252186458Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:58:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3613 runtime=io.containerd.runc.v2\n"
Feb 9 09:58:22.903075 kubelet[1518]: E0209 09:58:22.903020 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:58:22.924831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79c5c78b944bb9ab8dace49d1321322b66cabe2be58bcde4d58f2e3d950023a5-rootfs.mount: Deactivated successfully.
Feb 9 09:58:23.161418 kubelet[1518]: E0209 09:58:23.161272 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:58:23.163105 env[1216]: time="2024-02-09T09:58:23.163069040Z" level=info msg="CreateContainer within sandbox \"ead866450e17efbea851049b3b09c42c9a197148857a8217d0d7fc352899c002\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 09:58:23.173815 env[1216]: time="2024-02-09T09:58:23.173770982Z" level=info msg="CreateContainer within sandbox \"ead866450e17efbea851049b3b09c42c9a197148857a8217d0d7fc352899c002\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5a795719309d8d5e4ecc3de6d65c989b36ae109d178354a566cc7d6eead4db05\""
Feb 9 09:58:23.174011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3593539761.mount: Deactivated successfully.
Feb 9 09:58:23.175413 env[1216]: time="2024-02-09T09:58:23.175269694Z" level=info msg="StartContainer for \"5a795719309d8d5e4ecc3de6d65c989b36ae109d178354a566cc7d6eead4db05\""
Feb 9 09:58:23.223777 env[1216]: time="2024-02-09T09:58:23.223734235Z" level=info msg="StartContainer for \"5a795719309d8d5e4ecc3de6d65c989b36ae109d178354a566cc7d6eead4db05\" returns successfully"
Feb 9 09:58:23.240426 env[1216]: time="2024-02-09T09:58:23.240381546Z" level=info msg="shim disconnected" id=5a795719309d8d5e4ecc3de6d65c989b36ae109d178354a566cc7d6eead4db05
Feb 9 09:58:23.240426 env[1216]: time="2024-02-09T09:58:23.240426946Z" level=warning msg="cleaning up after shim disconnected" id=5a795719309d8d5e4ecc3de6d65c989b36ae109d178354a566cc7d6eead4db05 namespace=k8s.io
Feb 9 09:58:23.240643 env[1216]: time="2024-02-09T09:58:23.240437266Z" level=info msg="cleaning up dead shim"
Feb 9 09:58:23.246886 env[1216]: time="2024-02-09T09:58:23.246848391Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:58:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3667 runtime=io.containerd.runc.v2\n"
Feb 9 09:58:23.903653 kubelet[1518]: E0209 09:58:23.903620 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:58:23.924901 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a795719309d8d5e4ecc3de6d65c989b36ae109d178354a566cc7d6eead4db05-rootfs.mount: Deactivated successfully.
Feb 9 09:58:24.165191 kubelet[1518]: E0209 09:58:24.165099 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:58:24.167205 env[1216]: time="2024-02-09T09:58:24.167170156Z" level=info msg="CreateContainer within sandbox \"ead866450e17efbea851049b3b09c42c9a197148857a8217d0d7fc352899c002\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 09:58:24.177923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount970854266.mount: Deactivated successfully.
Feb 9 09:58:24.182160 env[1216]: time="2024-02-09T09:58:24.182116877Z" level=info msg="CreateContainer within sandbox \"ead866450e17efbea851049b3b09c42c9a197148857a8217d0d7fc352899c002\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"665c825a2159edf3f1fc2115acd8b16f741646029282e92695a2136adcbb2096\""
Feb 9 09:58:24.182914 env[1216]: time="2024-02-09T09:58:24.182884233Z" level=info msg="StartContainer for \"665c825a2159edf3f1fc2115acd8b16f741646029282e92695a2136adcbb2096\""
Feb 9 09:58:24.235957 env[1216]: time="2024-02-09T09:58:24.235913914Z" level=info msg="StartContainer for \"665c825a2159edf3f1fc2115acd8b16f741646029282e92695a2136adcbb2096\" returns successfully"
Feb 9 09:58:24.462372 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Feb 9 09:58:24.904468 kubelet[1518]: E0209 09:58:24.904415 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:58:25.170271 kubelet[1518]: E0209 09:58:25.170177 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:58:25.186913 kubelet[1518]: I0209 09:58:25.186858 1518 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-rsws5" podStartSLOduration=5.186824589 pod.CreationTimestamp="2024-02-09 09:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:58:25.186675829 +0000 UTC m=+80.224878061" watchObservedRunningTime="2024-02-09 09:58:25.186824589 +0000 UTC m=+80.225026821"
Feb 9 09:58:25.851423 kubelet[1518]: E0209 09:58:25.851389 1518 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:58:25.905058 kubelet[1518]: E0209 09:58:25.905020 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:58:26.174523 kubelet[1518]: E0209 09:58:26.174409 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:58:26.905294 kubelet[1518]: E0209 09:58:26.905244 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:58:27.099789 systemd-networkd[1100]: lxc_health: Link UP
Feb 9 09:58:27.118668 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 09:58:27.110952 systemd-networkd[1100]: lxc_health: Gained carrier
Feb 9 09:58:27.179196 kubelet[1518]: E0209 09:58:27.179064 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:58:27.906399 kubelet[1518]: E0209 09:58:27.906356 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:58:28.299494 systemd-networkd[1100]: lxc_health: Gained IPv6LL
Feb 9 09:58:28.339611 systemd[1]: run-containerd-runc-k8s.io-665c825a2159edf3f1fc2115acd8b16f741646029282e92695a2136adcbb2096-runc.Rs5F6t.mount: Deactivated successfully.
Feb 9 09:58:28.483773 kubelet[1518]: E0209 09:58:28.483740 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:58:28.906909 kubelet[1518]: E0209 09:58:28.906862 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:58:29.181469 kubelet[1518]: E0209 09:58:29.181360 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:58:29.907778 kubelet[1518]: E0209 09:58:29.907734 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:58:30.183508 kubelet[1518]: E0209 09:58:30.183404 1518 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:58:30.908193 kubelet[1518]: E0209 09:58:30.908133 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:58:31.908511 kubelet[1518]: E0209 09:58:31.908461 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:58:32.909119 kubelet[1518]: E0209 09:58:32.909074 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:58:33.909942 kubelet[1518]: E0209 09:58:33.909897 1518 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"