Feb 9 09:43:21.727518 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 9 09:43:21.727536 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 08:56:26 -00 2024
Feb 9 09:43:21.727544 kernel: efi: EFI v2.70 by EDK II
Feb 9 09:43:21.727550 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Feb 9 09:43:21.727555 kernel: random: crng init done
Feb 9 09:43:21.727560 kernel: ACPI: Early table checksum verification disabled
Feb 9 09:43:21.727566 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Feb 9 09:43:21.727572 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 9 09:43:21.727578 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:43:21.727583 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:43:21.727588 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:43:21.727593 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:43:21.727599 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:43:21.727604 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:43:21.727612 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:43:21.727618 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:43:21.727624 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:43:21.727629 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 9 09:43:21.727635 kernel: NUMA: Failed to initialise from firmware
Feb 9 09:43:21.727641 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 09:43:21.727646 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Feb 9 09:43:21.727652 kernel: Zone ranges:
Feb 9 09:43:21.727658 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 09:43:21.727664 kernel: DMA32 empty
Feb 9 09:43:21.727670 kernel: Normal empty
Feb 9 09:43:21.727675 kernel: Movable zone start for each node
Feb 9 09:43:21.727681 kernel: Early memory node ranges
Feb 9 09:43:21.727686 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Feb 9 09:43:21.727692 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Feb 9 09:43:21.727698 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Feb 9 09:43:21.727703 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Feb 9 09:43:21.727709 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Feb 9 09:43:21.727715 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Feb 9 09:43:21.727720 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Feb 9 09:43:21.727726 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 09:43:21.727733 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 9 09:43:21.727738 kernel: psci: probing for conduit method from ACPI.
Feb 9 09:43:21.727744 kernel: psci: PSCIv1.1 detected in firmware.
Feb 9 09:43:21.727750 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 09:43:21.727755 kernel: psci: Trusted OS migration not required
Feb 9 09:43:21.727763 kernel: psci: SMC Calling Convention v1.1
Feb 9 09:43:21.727770 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 9 09:43:21.727777 kernel: ACPI: SRAT not present
Feb 9 09:43:21.727783 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 09:43:21.727789 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 09:43:21.727822 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 9 09:43:21.727828 kernel: Detected PIPT I-cache on CPU0
Feb 9 09:43:21.727834 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 09:43:21.727840 kernel: CPU features: detected: Hardware dirty bit management
Feb 9 09:43:21.727846 kernel: CPU features: detected: Spectre-v4
Feb 9 09:43:21.727853 kernel: CPU features: detected: Spectre-BHB
Feb 9 09:43:21.727861 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 09:43:21.727872 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 09:43:21.727878 kernel: CPU features: detected: ARM erratum 1418040
Feb 9 09:43:21.727884 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 9 09:43:21.727890 kernel: Policy zone: DMA
Feb 9 09:43:21.727897 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 09:43:21.727904 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 09:43:21.727910 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 09:43:21.727916 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 09:43:21.727922 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 09:43:21.727928 kernel: Memory: 2459152K/2572288K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 113136K reserved, 0K cma-reserved)
Feb 9 09:43:21.727935 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 9 09:43:21.727942 kernel: trace event string verifier disabled
Feb 9 09:43:21.727948 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 09:43:21.727954 kernel: rcu: RCU event tracing is enabled.
Feb 9 09:43:21.727967 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 9 09:43:21.727973 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 09:43:21.727980 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 09:43:21.727986 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 09:43:21.727992 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 9 09:43:21.727998 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 09:43:21.728004 kernel: GICv3: 256 SPIs implemented
Feb 9 09:43:21.728011 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 09:43:21.728017 kernel: GICv3: Distributor has no Range Selector support
Feb 9 09:43:21.728023 kernel: Root IRQ handler: gic_handle_irq
Feb 9 09:43:21.728029 kernel: GICv3: 16 PPIs implemented
Feb 9 09:43:21.728035 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 9 09:43:21.728041 kernel: ACPI: SRAT not present
Feb 9 09:43:21.728047 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 9 09:43:21.728053 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 9 09:43:21.728059 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Feb 9 09:43:21.728065 kernel: GICv3: using LPI property table @0x00000000400d0000
Feb 9 09:43:21.728071 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Feb 9 09:43:21.728077 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 09:43:21.728084 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 9 09:43:21.728090 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 9 09:43:21.728097 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 9 09:43:21.728103 kernel: arm-pv: using stolen time PV
Feb 9 09:43:21.728109 kernel: Console: colour dummy device 80x25
Feb 9 09:43:21.728115 kernel: ACPI: Core revision 20210730
Feb 9 09:43:21.728122 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 9 09:43:21.728128 kernel: pid_max: default: 32768 minimum: 301
Feb 9 09:43:21.728134 kernel: LSM: Security Framework initializing
Feb 9 09:43:21.728140 kernel: SELinux: Initializing.
Feb 9 09:43:21.728148 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 09:43:21.728154 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 09:43:21.728160 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 09:43:21.728166 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 9 09:43:21.728172 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 9 09:43:21.728178 kernel: Remapping and enabling EFI services.
Feb 9 09:43:21.728184 kernel: smp: Bringing up secondary CPUs ...
Feb 9 09:43:21.728190 kernel: Detected PIPT I-cache on CPU1
Feb 9 09:43:21.728197 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 9 09:43:21.728204 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Feb 9 09:43:21.728211 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 09:43:21.728217 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 9 09:43:21.728223 kernel: Detected PIPT I-cache on CPU2
Feb 9 09:43:21.728230 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 9 09:43:21.728236 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Feb 9 09:43:21.728243 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 09:43:21.728249 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 9 09:43:21.728255 kernel: Detected PIPT I-cache on CPU3
Feb 9 09:43:21.728261 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 9 09:43:21.728268 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Feb 9 09:43:21.728275 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 09:43:21.728281 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 9 09:43:21.728287 kernel: smp: Brought up 1 node, 4 CPUs
Feb 9 09:43:21.728297 kernel: SMP: Total of 4 processors activated.
Feb 9 09:43:21.728305 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 09:43:21.728311 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 9 09:43:21.728318 kernel: CPU features: detected: Common not Private translations
Feb 9 09:43:21.728324 kernel: CPU features: detected: CRC32 instructions
Feb 9 09:43:21.728330 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 9 09:43:21.728337 kernel: CPU features: detected: LSE atomic instructions
Feb 9 09:43:21.728343 kernel: CPU features: detected: Privileged Access Never
Feb 9 09:43:21.728351 kernel: CPU features: detected: RAS Extension Support
Feb 9 09:43:21.728357 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 9 09:43:21.728364 kernel: CPU: All CPU(s) started at EL1
Feb 9 09:43:21.728370 kernel: alternatives: patching kernel code
Feb 9 09:43:21.728377 kernel: devtmpfs: initialized
Feb 9 09:43:21.728384 kernel: KASLR enabled
Feb 9 09:43:21.728391 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 09:43:21.728398 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 9 09:43:21.728404 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 09:43:21.728411 kernel: SMBIOS 3.0.0 present.
Feb 9 09:43:21.728417 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Feb 9 09:43:21.728424 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 09:43:21.728430 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 09:43:21.728437 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 09:43:21.728445 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 09:43:21.728451 kernel: audit: initializing netlink subsys (disabled)
Feb 9 09:43:21.728457 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1
Feb 9 09:43:21.728464 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 09:43:21.728470 kernel: cpuidle: using governor menu
Feb 9 09:43:21.728477 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 09:43:21.728483 kernel: ASID allocator initialised with 32768 entries
Feb 9 09:43:21.728490 kernel: ACPI: bus type PCI registered
Feb 9 09:43:21.728496 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 09:43:21.728504 kernel: Serial: AMBA PL011 UART driver
Feb 9 09:43:21.728510 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 09:43:21.728517 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 09:43:21.728523 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 09:43:21.728530 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 09:43:21.728536 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 09:43:21.728543 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 09:43:21.728549 kernel: ACPI: Added _OSI(Module Device)
Feb 9 09:43:21.728556 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 09:43:21.728563 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 09:43:21.728570 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 09:43:21.728576 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 09:43:21.728583 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 09:43:21.728589 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 09:43:21.728596 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 09:43:21.728602 kernel: ACPI: Interpreter enabled
Feb 9 09:43:21.728609 kernel: ACPI: Using GIC for interrupt routing
Feb 9 09:43:21.728615 kernel: ACPI: MCFG table detected, 1 entries
Feb 9 09:43:21.728623 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 9 09:43:21.728629 kernel: printk: console [ttyAMA0] enabled
Feb 9 09:43:21.728636 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 9 09:43:21.728757 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 09:43:21.728830 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 9 09:43:21.728896 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 9 09:43:21.728954 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 9 09:43:21.729015 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 9 09:43:21.729024 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 9 09:43:21.729031 kernel: PCI host bridge to bus 0000:00
Feb 9 09:43:21.729094 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 9 09:43:21.729146 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 9 09:43:21.729198 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 9 09:43:21.729249 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 9 09:43:21.729320 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 9 09:43:21.729387 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 9 09:43:21.729460 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 9 09:43:21.729538 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 9 09:43:21.729598 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 9 09:43:21.729656 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 9 09:43:21.729714 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 9 09:43:21.729775 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 9 09:43:21.729875 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 9 09:43:21.729938 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 9 09:43:21.729991 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 9 09:43:21.730000 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 9 09:43:21.730007 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 9 09:43:21.730013 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 9 09:43:21.730020 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 9 09:43:21.730028 kernel: iommu: Default domain type: Translated
Feb 9 09:43:21.730035 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 09:43:21.730041 kernel: vgaarb: loaded
Feb 9 09:43:21.730048 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 09:43:21.730055 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 09:43:21.730061 kernel: PTP clock support registered
Feb 9 09:43:21.730068 kernel: Registered efivars operations
Feb 9 09:43:21.730074 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 09:43:21.730081 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 09:43:21.730089 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 09:43:21.730095 kernel: pnp: PnP ACPI init
Feb 9 09:43:21.730164 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 9 09:43:21.730173 kernel: pnp: PnP ACPI: found 1 devices
Feb 9 09:43:21.730180 kernel: NET: Registered PF_INET protocol family
Feb 9 09:43:21.730186 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 09:43:21.730193 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 09:43:21.730200 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 09:43:21.730208 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 09:43:21.730215 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 09:43:21.730221 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 09:43:21.730228 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 09:43:21.730234 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 09:43:21.730241 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 09:43:21.730247 kernel: PCI: CLS 0 bytes, default 64
Feb 9 09:43:21.730254 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 9 09:43:21.730260 kernel: kvm [1]: HYP mode not available
Feb 9 09:43:21.730268 kernel: Initialise system trusted keyrings
Feb 9 09:43:21.730275 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 09:43:21.730281 kernel: Key type asymmetric registered
Feb 9 09:43:21.730288 kernel: Asymmetric key parser 'x509' registered
Feb 9 09:43:21.730294 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 09:43:21.730300 kernel: io scheduler mq-deadline registered
Feb 9 09:43:21.730307 kernel: io scheduler kyber registered
Feb 9 09:43:21.730313 kernel: io scheduler bfq registered
Feb 9 09:43:21.730320 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 9 09:43:21.730328 kernel: ACPI: button: Power Button [PWRB]
Feb 9 09:43:21.730335 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 9 09:43:21.730394 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 9 09:43:21.730403 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 09:43:21.730410 kernel: thunder_xcv, ver 1.0
Feb 9 09:43:21.730416 kernel: thunder_bgx, ver 1.0
Feb 9 09:43:21.730423 kernel: nicpf, ver 1.0
Feb 9 09:43:21.730429 kernel: nicvf, ver 1.0
Feb 9 09:43:21.730494 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 09:43:21.730556 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T09:43:21 UTC (1707471801)
Feb 9 09:43:21.730566 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 09:43:21.730572 kernel: NET: Registered PF_INET6 protocol family
Feb 9 09:43:21.730579 kernel: Segment Routing with IPv6
Feb 9 09:43:21.730585 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 09:43:21.730591 kernel: NET: Registered PF_PACKET protocol family
Feb 9 09:43:21.730598 kernel: Key type dns_resolver registered
Feb 9 09:43:21.730605 kernel: registered taskstats version 1
Feb 9 09:43:21.730612 kernel: Loading compiled-in X.509 certificates
Feb 9 09:43:21.730619 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: ca91574208414224935c9cea513398977daf917d'
Feb 9 09:43:21.730626 kernel: Key type .fscrypt registered
Feb 9 09:43:21.730632 kernel: Key type fscrypt-provisioning registered
Feb 9 09:43:21.730639 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 09:43:21.730645 kernel: ima: Allocated hash algorithm: sha1
Feb 9 09:43:21.730651 kernel: ima: No architecture policies found
Feb 9 09:43:21.730658 kernel: Freeing unused kernel memory: 34688K
Feb 9 09:43:21.730665 kernel: Run /init as init process
Feb 9 09:43:21.730672 kernel: with arguments:
Feb 9 09:43:21.730679 kernel: /init
Feb 9 09:43:21.730685 kernel: with environment:
Feb 9 09:43:21.730691 kernel: HOME=/
Feb 9 09:43:21.730697 kernel: TERM=linux
Feb 9 09:43:21.730703 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 09:43:21.730712 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 09:43:21.730720 systemd[1]: Detected virtualization kvm.
Feb 9 09:43:21.730729 systemd[1]: Detected architecture arm64.
Feb 9 09:43:21.730735 systemd[1]: Running in initrd.
Feb 9 09:43:21.730742 systemd[1]: No hostname configured, using default hostname.
Feb 9 09:43:21.730749 systemd[1]: Hostname set to .
Feb 9 09:43:21.730756 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 09:43:21.730763 systemd[1]: Queued start job for default target initrd.target.
Feb 9 09:43:21.730770 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 09:43:21.730777 systemd[1]: Reached target cryptsetup.target.
Feb 9 09:43:21.730785 systemd[1]: Reached target paths.target.
Feb 9 09:43:21.730807 systemd[1]: Reached target slices.target.
Feb 9 09:43:21.730815 systemd[1]: Reached target swap.target.
Feb 9 09:43:21.730822 systemd[1]: Reached target timers.target.
Feb 9 09:43:21.730829 systemd[1]: Listening on iscsid.socket.
Feb 9 09:43:21.730836 systemd[1]: Listening on iscsiuio.socket.
Feb 9 09:43:21.730843 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 09:43:21.730852 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 09:43:21.730859 systemd[1]: Listening on systemd-journald.socket.
Feb 9 09:43:21.730866 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 09:43:21.730879 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 09:43:21.730886 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 09:43:21.730893 systemd[1]: Reached target sockets.target.
Feb 9 09:43:21.730900 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 09:43:21.730907 systemd[1]: Finished network-cleanup.service.
Feb 9 09:43:21.730914 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 09:43:21.730922 systemd[1]: Starting systemd-journald.service...
Feb 9 09:43:21.730929 systemd[1]: Starting systemd-modules-load.service...
Feb 9 09:43:21.730936 systemd[1]: Starting systemd-resolved.service...
Feb 9 09:43:21.730943 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 09:43:21.730950 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 09:43:21.730957 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 09:43:21.730964 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 09:43:21.730971 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 09:43:21.730977 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 09:43:21.730986 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 09:43:21.730993 kernel: audit: type=1130 audit(1707471801.726:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:43:21.731003 systemd-journald[290]: Journal started
Feb 9 09:43:21.731041 systemd-journald[290]: Runtime Journal (/run/log/journal/67442d716e2e473681b08462e771dd91) is 6.0M, max 48.7M, 42.6M free.
Feb 9 09:43:21.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:43:21.716993 systemd-modules-load[291]: Inserted module 'overlay'
Feb 9 09:43:21.732416 systemd[1]: Started systemd-journald.service.
Feb 9 09:43:21.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:43:21.734806 kernel: audit: type=1130 audit(1707471801.732:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:43:21.736529 systemd-resolved[292]: Positive Trust Anchors:
Feb 9 09:43:21.738845 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 09:43:21.736545 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 09:43:21.736573 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 09:43:21.745200 kernel: Bridge firewalling registered
Feb 9 09:43:21.740532 systemd-modules-load[291]: Inserted module 'br_netfilter'
Feb 9 09:43:21.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:43:21.740617 systemd-resolved[292]: Defaulting to hostname 'linux'.
Feb 9 09:43:21.749238 kernel: audit: type=1130 audit(1707471801.744:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:43:21.741372 systemd[1]: Started systemd-resolved.service.
Feb 9 09:43:21.745760 systemd[1]: Reached target nss-lookup.target.
Feb 9 09:43:21.753813 kernel: SCSI subsystem initialized
Feb 9 09:43:21.755479 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 09:43:21.761360 kernel: audit: type=1130 audit(1707471801.756:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:43:21.761379 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 09:43:21.761388 kernel: device-mapper: uevent: version 1.0.3
Feb 9 09:43:21.761397 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 09:43:21.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:43:21.756882 systemd[1]: Starting dracut-cmdline.service...
Feb 9 09:43:21.763520 systemd-modules-load[291]: Inserted module 'dm_multipath'
Feb 9 09:43:21.764225 systemd[1]: Finished systemd-modules-load.service.
Feb 9 09:43:21.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:43:21.765587 systemd[1]: Starting systemd-sysctl.service...
Feb 9 09:43:21.767814 kernel: audit: type=1130 audit(1707471801.764:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:43:21.769009 dracut-cmdline[307]: dracut-dracut-053
Feb 9 09:43:21.771219 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 09:43:21.772684 systemd[1]: Finished systemd-sysctl.service.
Feb 9 09:43:21.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:43:21.777822 kernel: audit: type=1130 audit(1707471801.774:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:43:21.829818 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 09:43:21.836808 kernel: iscsi: registered transport (tcp)
Feb 9 09:43:21.850828 kernel: iscsi: registered transport (qla4xxx)
Feb 9 09:43:21.850882 kernel: QLogic iSCSI HBA Driver
Feb 9 09:43:21.884320 systemd[1]: Finished dracut-cmdline.service.
Feb 9 09:43:21.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:43:21.885931 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 09:43:21.888377 kernel: audit: type=1130 audit(1707471801.884:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:43:21.929823 kernel: raid6: neonx8 gen() 13785 MB/s
Feb 9 09:43:21.946809 kernel: raid6: neonx8 xor() 10812 MB/s
Feb 9 09:43:21.963807 kernel: raid6: neonx4 gen() 13555 MB/s
Feb 9 09:43:21.980805 kernel: raid6: neonx4 xor() 11247 MB/s
Feb 9 09:43:21.997803 kernel: raid6: neonx2 gen() 12974 MB/s
Feb 9 09:43:22.014804 kernel: raid6: neonx2 xor() 10231 MB/s
Feb 9 09:43:22.031808 kernel: raid6: neonx1 gen() 10510 MB/s
Feb 9 09:43:22.048806 kernel: raid6: neonx1 xor() 8788 MB/s
Feb 9 09:43:22.065812 kernel: raid6: int64x8 gen() 6290 MB/s
Feb 9 09:43:22.082827 kernel: raid6: int64x8 xor() 3550 MB/s
Feb 9 09:43:22.099809 kernel: raid6: int64x4 gen() 7208 MB/s
Feb 9 09:43:22.116817 kernel: raid6: int64x4 xor() 3853 MB/s
Feb 9 09:43:22.133825 kernel: raid6: int64x2 gen() 6155 MB/s
Feb 9 09:43:22.150816 kernel: raid6: int64x2 xor() 3322 MB/s
Feb 9 09:43:22.167815 kernel: raid6: int64x1 gen() 5047 MB/s
Feb 9 09:43:22.184989 kernel: raid6: int64x1 xor() 2647 MB/s
Feb 9 09:43:22.185010 kernel: raid6: using algorithm neonx8 gen() 13785 MB/s
Feb 9 09:43:22.185027 kernel: raid6: .... xor() 10812 MB/s, rmw enabled
Feb 9 09:43:22.185043 kernel: raid6: using neon recovery algorithm
Feb 9 09:43:22.195845 kernel: xor: measuring software checksum speed
Feb 9 09:43:22.195864 kernel: 8regs : 17297 MB/sec
Feb 9 09:43:22.196805 kernel: 32regs : 20755 MB/sec
Feb 9 09:43:22.197807 kernel: arm64_neon : 27911 MB/sec
Feb 9 09:43:22.197819 kernel: xor: using function: arm64_neon (27911 MB/sec)
Feb 9 09:43:22.250816 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 09:43:22.260770 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 09:43:22.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:43:22.262000 audit: BPF prog-id=7 op=LOAD
Feb 9 09:43:22.264227 kernel: audit: type=1130 audit(1707471802.260:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:43:22.264247 kernel: audit: type=1334 audit(1707471802.262:10): prog-id=7 op=LOAD
Feb 9 09:43:22.263000 audit: BPF prog-id=8 op=LOAD
Feb 9 09:43:22.264603 systemd[1]: Starting systemd-udevd.service...
Feb 9 09:43:22.276128 systemd-udevd[490]: Using default interface naming scheme 'v252'.
Feb 9 09:43:22.279384 systemd[1]: Started systemd-udevd.service.
Feb 9 09:43:22.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:43:22.281232 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 09:43:22.292365 dracut-pre-trigger[497]: rd.md=0: removing MD RAID activation
Feb 9 09:43:22.317957 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 09:43:22.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:43:22.319251 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 09:43:22.352646 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 09:43:22.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:43:22.382948 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 9 09:43:22.385932 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 9 09:43:22.385969 kernel: GPT:9289727 != 19775487 Feb 9 09:43:22.385982 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 09:43:22.385992 kernel: GPT:9289727 != 19775487 Feb 9 09:43:22.386881 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 09:43:22.386897 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 09:43:22.405018 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 09:43:22.406090 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 09:43:22.410805 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (550) Feb 9 09:43:22.414229 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 09:43:22.419346 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 09:43:22.422711 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 09:43:22.424332 systemd[1]: Starting disk-uuid.service... Feb 9 09:43:22.430205 disk-uuid[560]: Primary Header is updated. Feb 9 09:43:22.430205 disk-uuid[560]: Secondary Entries is updated. Feb 9 09:43:22.430205 disk-uuid[560]: Secondary Header is updated. Feb 9 09:43:22.433820 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 09:43:23.452269 disk-uuid[561]: The operation has completed successfully. Feb 9 09:43:23.453228 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 09:43:23.476939 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 09:43:23.478023 systemd[1]: Finished disk-uuid.service. Feb 9 09:43:23.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:23.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:43:23.480309 systemd[1]: Starting verity-setup.service... Feb 9 09:43:23.493809 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 9 09:43:23.515390 systemd[1]: Found device dev-mapper-usr.device. Feb 9 09:43:23.517452 systemd[1]: Mounting sysusr-usr.mount... Feb 9 09:43:23.519433 systemd[1]: Finished verity-setup.service. Feb 9 09:43:23.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:23.564542 systemd[1]: Mounted sysusr-usr.mount. Feb 9 09:43:23.565666 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 09:43:23.565364 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 09:43:23.566102 systemd[1]: Starting ignition-setup.service... Feb 9 09:43:23.567666 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 09:43:23.574113 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 09:43:23.574151 kernel: BTRFS info (device vda6): using free space tree Feb 9 09:43:23.574160 kernel: BTRFS info (device vda6): has skinny extents Feb 9 09:43:23.581030 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 09:43:23.587239 systemd[1]: Finished ignition-setup.service. Feb 9 09:43:23.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:23.588656 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 09:43:23.644446 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 09:43:23.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 09:43:23.645000 audit: BPF prog-id=9 op=LOAD Feb 9 09:43:23.646464 systemd[1]: Starting systemd-networkd.service... Feb 9 09:43:23.666516 ignition[646]: Ignition 2.14.0 Feb 9 09:43:23.666526 ignition[646]: Stage: fetch-offline Feb 9 09:43:23.666564 ignition[646]: no configs at "/usr/lib/ignition/base.d" Feb 9 09:43:23.666573 ignition[646]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 09:43:23.666700 ignition[646]: parsed url from cmdline: "" Feb 9 09:43:23.666703 ignition[646]: no config URL provided Feb 9 09:43:23.666708 ignition[646]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 09:43:23.666714 ignition[646]: no config at "/usr/lib/ignition/user.ign" Feb 9 09:43:23.666732 ignition[646]: op(1): [started] loading QEMU firmware config module Feb 9 09:43:23.666737 ignition[646]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 9 09:43:23.672206 systemd-networkd[739]: lo: Link UP Feb 9 09:43:23.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:23.669930 ignition[646]: op(1): [finished] loading QEMU firmware config module Feb 9 09:43:23.672210 systemd-networkd[739]: lo: Gained carrier Feb 9 09:43:23.669950 ignition[646]: QEMU firmware config was not found. Ignoring... Feb 9 09:43:23.672545 systemd-networkd[739]: Enumeration completed Feb 9 09:43:23.672647 systemd[1]: Started systemd-networkd.service. Feb 9 09:43:23.672716 systemd-networkd[739]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 09:43:23.673746 systemd-networkd[739]: eth0: Link UP Feb 9 09:43:23.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:43:23.673750 systemd-networkd[739]: eth0: Gained carrier Feb 9 09:43:23.673993 systemd[1]: Reached target network.target. Feb 9 09:43:23.675727 systemd[1]: Starting iscsiuio.service... Feb 9 09:43:23.685099 systemd[1]: Started iscsiuio.service. Feb 9 09:43:23.687049 systemd[1]: Starting iscsid.service... Feb 9 09:43:23.690327 iscsid[746]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 09:43:23.690327 iscsid[746]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 09:43:23.690327 iscsid[746]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 09:43:23.690327 iscsid[746]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 09:43:23.690327 iscsid[746]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 09:43:23.690327 iscsid[746]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 09:43:23.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:23.692971 systemd[1]: Started iscsid.service. Feb 9 09:43:23.696436 systemd-networkd[739]: eth0: DHCPv4 address 10.0.0.14/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 09:43:23.697083 systemd[1]: Starting dracut-initqueue.service... Feb 9 09:43:23.707088 systemd[1]: Finished dracut-initqueue.service. Feb 9 09:43:23.708072 systemd[1]: Reached target remote-fs-pre.target. Feb 9 09:43:23.709228 systemd[1]: Reached target remote-cryptsetup.target. 
Feb 9 09:43:23.710463 systemd[1]: Reached target remote-fs.target. Feb 9 09:43:23.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:23.712305 systemd[1]: Starting dracut-pre-mount.service... Feb 9 09:43:23.712370 ignition[646]: parsing config with SHA512: 0b8a153e702bc679d7fabd90c896956880419636ca7f12d43b27a7f47c2fef30df23520712231cbfe6a4f970ff2f74654b996024849e73744d96027405fa5153 Feb 9 09:43:23.721150 systemd[1]: Finished dracut-pre-mount.service. Feb 9 09:43:23.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:23.740608 unknown[646]: fetched base config from "system" Feb 9 09:43:23.741345 unknown[646]: fetched user config from "qemu" Feb 9 09:43:23.742492 ignition[646]: fetch-offline: fetch-offline passed Feb 9 09:43:23.743349 ignition[646]: Ignition finished successfully Feb 9 09:43:23.744745 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 09:43:23.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:23.745504 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 9 09:43:23.746249 systemd[1]: Starting ignition-kargs.service... 
Feb 9 09:43:23.754430 ignition[761]: Ignition 2.14.0 Feb 9 09:43:23.754439 ignition[761]: Stage: kargs Feb 9 09:43:23.754533 ignition[761]: no configs at "/usr/lib/ignition/base.d" Feb 9 09:43:23.754543 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 09:43:23.755416 ignition[761]: kargs: kargs passed Feb 9 09:43:23.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:23.757037 systemd[1]: Finished ignition-kargs.service. Feb 9 09:43:23.755461 ignition[761]: Ignition finished successfully Feb 9 09:43:23.759099 systemd[1]: Starting ignition-disks.service... Feb 9 09:43:23.765399 ignition[767]: Ignition 2.14.0 Feb 9 09:43:23.765408 ignition[767]: Stage: disks Feb 9 09:43:23.765501 ignition[767]: no configs at "/usr/lib/ignition/base.d" Feb 9 09:43:23.767118 systemd[1]: Finished ignition-disks.service. Feb 9 09:43:23.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:23.765511 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 09:43:23.768312 systemd[1]: Reached target initrd-root-device.target. Feb 9 09:43:23.766322 ignition[767]: disks: disks passed Feb 9 09:43:23.769225 systemd[1]: Reached target local-fs-pre.target. Feb 9 09:43:23.766364 ignition[767]: Ignition finished successfully Feb 9 09:43:23.770346 systemd[1]: Reached target local-fs.target. Feb 9 09:43:23.771320 systemd[1]: Reached target sysinit.target. Feb 9 09:43:23.772186 systemd[1]: Reached target basic.target. Feb 9 09:43:23.774006 systemd[1]: Starting systemd-fsck-root.service... 
Feb 9 09:43:23.784625 systemd-fsck[776]: ROOT: clean, 602/553520 files, 56013/553472 blocks Feb 9 09:43:23.788635 systemd[1]: Finished systemd-fsck-root.service. Feb 9 09:43:23.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:23.790279 systemd[1]: Mounting sysroot.mount... Feb 9 09:43:23.796543 systemd[1]: Mounted sysroot.mount. Feb 9 09:43:23.797180 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 09:43:23.797665 systemd[1]: Reached target initrd-root-fs.target. Feb 9 09:43:23.799677 systemd[1]: Mounting sysroot-usr.mount... Feb 9 09:43:23.800484 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 9 09:43:23.800519 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 09:43:23.800541 systemd[1]: Reached target ignition-diskful.target. Feb 9 09:43:23.802256 systemd[1]: Mounted sysroot-usr.mount. Feb 9 09:43:23.803930 systemd[1]: Starting initrd-setup-root.service... Feb 9 09:43:23.807987 initrd-setup-root[786]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 09:43:23.812210 initrd-setup-root[794]: cut: /sysroot/etc/group: No such file or directory Feb 9 09:43:23.815077 initrd-setup-root[802]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 09:43:23.819103 initrd-setup-root[810]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 09:43:23.844533 systemd[1]: Finished initrd-setup-root.service. Feb 9 09:43:23.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:23.845907 systemd[1]: Starting ignition-mount.service... 
Feb 9 09:43:23.847228 systemd[1]: Starting sysroot-boot.service... Feb 9 09:43:23.852065 bash[827]: umount: /sysroot/usr/share/oem: not mounted. Feb 9 09:43:23.860230 ignition[829]: INFO : Ignition 2.14.0 Feb 9 09:43:23.860230 ignition[829]: INFO : Stage: mount Feb 9 09:43:23.861702 ignition[829]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 09:43:23.861702 ignition[829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 09:43:23.861702 ignition[829]: INFO : mount: mount passed Feb 9 09:43:23.861702 ignition[829]: INFO : Ignition finished successfully Feb 9 09:43:23.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:23.862767 systemd[1]: Finished ignition-mount.service. Feb 9 09:43:23.867880 systemd[1]: Finished sysroot-boot.service. Feb 9 09:43:23.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:24.525941 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 09:43:24.531869 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (837) Feb 9 09:43:24.534101 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 09:43:24.534124 kernel: BTRFS info (device vda6): using free space tree Feb 9 09:43:24.534134 kernel: BTRFS info (device vda6): has skinny extents Feb 9 09:43:24.536967 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 09:43:24.538417 systemd[1]: Starting ignition-files.service... 
Feb 9 09:43:24.552411 ignition[857]: INFO : Ignition 2.14.0 Feb 9 09:43:24.552411 ignition[857]: INFO : Stage: files Feb 9 09:43:24.553655 ignition[857]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 09:43:24.553655 ignition[857]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 09:43:24.553655 ignition[857]: DEBUG : files: compiled without relabeling support, skipping Feb 9 09:43:24.556462 ignition[857]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 09:43:24.556462 ignition[857]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 09:43:24.558432 ignition[857]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 09:43:24.558432 ignition[857]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 09:43:24.560589 ignition[857]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 09:43:24.560589 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 9 09:43:24.560589 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1 Feb 9 09:43:24.558664 unknown[857]: wrote ssh authorized keys file for user: core Feb 9 09:43:24.834222 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 09:43:25.023249 ignition[857]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742 Feb 9 09:43:25.025468 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 9 
09:43:25.025468 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 9 09:43:25.025468 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1 Feb 9 09:43:25.204466 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 09:43:25.274215 systemd-networkd[739]: eth0: Gained IPv6LL Feb 9 09:43:25.323994 ignition[857]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c Feb 9 09:43:25.326108 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 9 09:43:25.326108 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 09:43:25.326108 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1 Feb 9 09:43:25.411408 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 09:43:25.667289 ignition[857]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db Feb 9 09:43:25.669510 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 09:43:25.669510 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 09:43:25.669510 ignition[857]: INFO : files: 
createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1 Feb 9 09:43:25.693980 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 9 09:43:26.426610 ignition[857]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d Feb 9 09:43:26.428994 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 09:43:26.428994 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Feb 9 09:43:26.428994 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 09:43:26.428994 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 09:43:26.428994 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 09:43:26.428994 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 09:43:26.428994 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 09:43:26.428994 ignition[857]: INFO : files: op(a): [started] processing unit "prepare-cni-plugins.service" Feb 9 09:43:26.428994 ignition[857]: INFO : files: op(a): op(b): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 09:43:26.428994 ignition[857]: INFO : files: op(a): op(b): [finished] writing unit "prepare-cni-plugins.service" at 
"/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 09:43:26.428994 ignition[857]: INFO : files: op(a): [finished] processing unit "prepare-cni-plugins.service" Feb 9 09:43:26.428994 ignition[857]: INFO : files: op(c): [started] processing unit "prepare-critools.service" Feb 9 09:43:26.428994 ignition[857]: INFO : files: op(c): op(d): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 09:43:26.428994 ignition[857]: INFO : files: op(c): op(d): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 09:43:26.428994 ignition[857]: INFO : files: op(c): [finished] processing unit "prepare-critools.service" Feb 9 09:43:26.428994 ignition[857]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Feb 9 09:43:26.428994 ignition[857]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 09:43:26.452040 ignition[857]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 09:43:26.452040 ignition[857]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Feb 9 09:43:26.452040 ignition[857]: INFO : files: op(10): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 09:43:26.452040 ignition[857]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 09:43:26.452040 ignition[857]: INFO : files: op(11): [started] setting preset to enabled for "prepare-critools.service" Feb 9 09:43:26.452040 ignition[857]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 09:43:26.452040 ignition[857]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Feb 9 09:43:26.452040 ignition[857]: INFO : files: 
op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 09:43:26.463146 ignition[857]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 09:43:26.465207 ignition[857]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Feb 9 09:43:26.465207 ignition[857]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 09:43:26.465207 ignition[857]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 09:43:26.465207 ignition[857]: INFO : files: files passed Feb 9 09:43:26.465207 ignition[857]: INFO : Ignition finished successfully Feb 9 09:43:26.473500 kernel: kauditd_printk_skb: 22 callbacks suppressed Feb 9 09:43:26.473521 kernel: audit: type=1130 audit(1707471806.466:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.465312 systemd[1]: Finished ignition-files.service. Feb 9 09:43:26.467704 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 09:43:26.475179 initrd-setup-root-after-ignition[882]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 9 09:43:26.480558 kernel: audit: type=1130 audit(1707471806.475:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:43:26.480582 kernel: audit: type=1131 audit(1707471806.475:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.470698 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 09:43:26.482714 initrd-setup-root-after-ignition[884]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 09:43:26.486865 kernel: audit: type=1130 audit(1707471806.480:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.471314 systemd[1]: Starting ignition-quench.service... Feb 9 09:43:26.474705 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 09:43:26.474781 systemd[1]: Finished ignition-quench.service. Feb 9 09:43:26.476213 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 09:43:26.481318 systemd[1]: Reached target ignition-complete.target. Feb 9 09:43:26.484926 systemd[1]: Starting initrd-parse-etc.service... 
Feb 9 09:43:26.496905 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 09:43:26.496997 systemd[1]: Finished initrd-parse-etc.service. Feb 9 09:43:26.498379 systemd[1]: Reached target initrd-fs.target. Feb 9 09:43:26.503272 kernel: audit: type=1130 audit(1707471806.497:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.503290 kernel: audit: type=1131 audit(1707471806.497:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.502870 systemd[1]: Reached target initrd.target. Feb 9 09:43:26.503920 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 09:43:26.504609 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 09:43:26.514394 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 09:43:26.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.515882 systemd[1]: Starting initrd-cleanup.service... Feb 9 09:43:26.518409 kernel: audit: type=1130 audit(1707471806.514:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:43:26.523475 systemd[1]: Stopped target network.target. Feb 9 09:43:26.524267 systemd[1]: Stopped target nss-lookup.target. Feb 9 09:43:26.525356 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 09:43:26.526490 systemd[1]: Stopped target timers.target. Feb 9 09:43:26.527554 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 09:43:26.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.527657 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 09:43:26.531692 kernel: audit: type=1131 audit(1707471806.527:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.528576 systemd[1]: Stopped target initrd.target. Feb 9 09:43:26.531352 systemd[1]: Stopped target basic.target. Feb 9 09:43:26.532358 systemd[1]: Stopped target ignition-complete.target. Feb 9 09:43:26.533557 systemd[1]: Stopped target ignition-diskful.target. Feb 9 09:43:26.534583 systemd[1]: Stopped target initrd-root-device.target. Feb 9 09:43:26.535969 systemd[1]: Stopped target remote-fs.target. Feb 9 09:43:26.536917 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 09:43:26.538129 systemd[1]: Stopped target sysinit.target. Feb 9 09:43:26.539161 systemd[1]: Stopped target local-fs.target. Feb 9 09:43:26.540164 systemd[1]: Stopped target local-fs-pre.target. Feb 9 09:43:26.541218 systemd[1]: Stopped target swap.target. Feb 9 09:43:26.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.542188 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Feb 9 09:43:26.546558 kernel: audit: type=1131 audit(1707471806.543:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.542295 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 09:43:26.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.543403 systemd[1]: Stopped target cryptsetup.target. Feb 9 09:43:26.550409 kernel: audit: type=1131 audit(1707471806.547:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.546074 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 09:43:26.546174 systemd[1]: Stopped dracut-initqueue.service. Feb 9 09:43:26.547350 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 09:43:26.547447 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 09:43:26.550142 systemd[1]: Stopped target paths.target. Feb 9 09:43:26.551059 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 09:43:26.554836 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 09:43:26.555702 systemd[1]: Stopped target slices.target. Feb 9 09:43:26.556787 systemd[1]: Stopped target sockets.target. Feb 9 09:43:26.557847 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 09:43:26.557930 systemd[1]: Closed iscsid.socket. Feb 9 09:43:26.558905 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Feb 9 09:43:26.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.558971 systemd[1]: Closed iscsiuio.socket. Feb 9 09:43:26.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.559948 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 09:43:26.560046 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 09:43:26.561092 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 09:43:26.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.561182 systemd[1]: Stopped ignition-files.service. Feb 9 09:43:26.563017 systemd[1]: Stopping ignition-mount.service... Feb 9 09:43:26.563857 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 09:43:26.563985 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 09:43:26.565926 systemd[1]: Stopping sysroot-boot.service... Feb 9 09:43:26.569713 ignition[898]: INFO : Ignition 2.14.0 Feb 9 09:43:26.569713 ignition[898]: INFO : Stage: umount Feb 9 09:43:26.569713 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 09:43:26.569713 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 09:43:26.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.567313 systemd[1]: Stopping systemd-networkd.service... 
Feb 9 09:43:26.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.574672 ignition[898]: INFO : umount: umount passed Feb 9 09:43:26.574672 ignition[898]: INFO : Ignition finished successfully Feb 9 09:43:26.568104 systemd[1]: Stopping systemd-resolved.service... Feb 9 09:43:26.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.570826 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 09:43:26.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.570972 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 09:43:26.571899 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 09:43:26.572235 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 09:43:26.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.574784 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 09:43:26.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.574839 systemd-networkd[739]: eth0: DHCPv6 lease lost Feb 9 09:43:26.584000 audit: BPF prog-id=9 op=UNLOAD Feb 9 09:43:26.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 09:43:26.575740 systemd[1]: Stopped ignition-mount.service. Feb 9 09:43:26.577783 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 09:43:26.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.578257 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 09:43:26.578338 systemd[1]: Stopped systemd-networkd.service. Feb 9 09:43:26.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.579597 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 09:43:26.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.579659 systemd[1]: Closed systemd-networkd.socket. Feb 9 09:43:26.580767 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 09:43:26.580962 systemd[1]: Stopped ignition-disks.service. Feb 9 09:43:26.582633 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 09:43:26.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.582675 systemd[1]: Stopped ignition-kargs.service. Feb 9 09:43:26.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:43:26.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.584163 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 09:43:26.602000 audit: BPF prog-id=6 op=UNLOAD Feb 9 09:43:26.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.584200 systemd[1]: Stopped ignition-setup.service. Feb 9 09:43:26.586932 systemd[1]: Stopping network-cleanup.service... Feb 9 09:43:26.587531 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 09:43:26.587588 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 09:43:26.588739 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 09:43:26.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.589588 systemd[1]: Stopped systemd-sysctl.service. Feb 9 09:43:26.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.591075 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 09:43:26.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.591116 systemd[1]: Stopped systemd-modules-load.service. Feb 9 09:43:26.592496 systemd[1]: Stopping systemd-udevd.service... 
Feb 9 09:43:26.597777 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 09:43:26.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.598455 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 09:43:26.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.598544 systemd[1]: Stopped systemd-resolved.service. Feb 9 09:43:26.599897 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 09:43:26.599979 systemd[1]: Finished initrd-cleanup.service. Feb 9 09:43:26.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.601526 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 09:43:26.601682 systemd[1]: Stopped systemd-udevd.service. Feb 9 09:43:26.605235 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 09:43:26.605275 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 09:43:26.606092 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 09:43:26.606122 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 09:43:26.608092 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 09:43:26.608135 systemd[1]: Stopped dracut-pre-udev.service. 
Feb 9 09:43:26.610211 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 09:43:26.610255 systemd[1]: Stopped dracut-cmdline.service. Feb 9 09:43:26.611349 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 09:43:26.611385 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 09:43:26.613033 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 09:43:26.614161 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 09:43:26.614216 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 09:43:26.615563 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 09:43:26.615654 systemd[1]: Stopped network-cleanup.service. Feb 9 09:43:26.618238 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 09:43:26.618313 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 09:43:26.646316 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 09:43:26.646407 systemd[1]: Stopped sysroot-boot.service. Feb 9 09:43:26.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.647596 systemd[1]: Reached target initrd-switch-root.target. Feb 9 09:43:26.648581 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 09:43:26.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.648631 systemd[1]: Stopped initrd-setup-root.service. Feb 9 09:43:26.650316 systemd[1]: Starting initrd-switch-root.service... Feb 9 09:43:26.656565 systemd[1]: Switching root. Feb 9 09:43:26.675136 iscsid[746]: iscsid shutting down. 
Feb 9 09:43:26.675624 systemd-journald[290]: Journal stopped Feb 9 09:43:28.727921 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). Feb 9 09:43:28.729695 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 09:43:28.729710 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 09:43:28.729720 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 09:43:28.729730 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 09:43:28.729740 kernel: SELinux: policy capability open_perms=1 Feb 9 09:43:28.729756 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 09:43:28.729766 kernel: SELinux: policy capability always_check_network=0 Feb 9 09:43:28.729777 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 09:43:28.729786 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 09:43:28.729800 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 09:43:28.729810 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 09:43:28.729821 systemd[1]: Successfully loaded SELinux policy in 30.903ms. Feb 9 09:43:28.729840 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.065ms. Feb 9 09:43:28.729857 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 09:43:28.729870 systemd[1]: Detected virtualization kvm. Feb 9 09:43:28.729880 systemd[1]: Detected architecture arm64. Feb 9 09:43:28.729892 systemd[1]: Detected first boot. Feb 9 09:43:28.729903 systemd[1]: Initializing machine ID from VM UUID. Feb 9 09:43:28.729915 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Feb 9 09:43:28.729925 systemd[1]: Populated /etc with preset unit settings. Feb 9 09:43:28.729936 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:43:28.729947 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:43:28.729959 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:43:28.729970 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 09:43:28.729980 systemd[1]: Stopped iscsiuio.service. Feb 9 09:43:28.729992 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 09:43:28.730003 systemd[1]: Stopped iscsid.service. Feb 9 09:43:28.730014 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 09:43:28.730028 systemd[1]: Stopped initrd-switch-root.service. Feb 9 09:43:28.730038 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 09:43:28.730048 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 09:43:28.730059 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 09:43:28.730069 systemd[1]: Created slice system-getty.slice. Feb 9 09:43:28.730082 systemd[1]: Created slice system-modprobe.slice. Feb 9 09:43:28.730094 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 09:43:28.730105 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 09:43:28.730116 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 09:43:28.730126 systemd[1]: Created slice user.slice. Feb 9 09:43:28.730137 systemd[1]: Started systemd-ask-password-console.path. Feb 9 09:43:28.730147 systemd[1]: Started systemd-ask-password-wall.path. 
Feb 9 09:43:28.730158 systemd[1]: Set up automount boot.automount. Feb 9 09:43:28.730169 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 09:43:28.730180 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 09:43:28.730190 systemd[1]: Stopped target initrd-fs.target. Feb 9 09:43:28.730201 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 09:43:28.730211 systemd[1]: Reached target integritysetup.target. Feb 9 09:43:28.730221 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 09:43:28.730231 systemd[1]: Reached target remote-fs.target. Feb 9 09:43:28.730241 systemd[1]: Reached target slices.target. Feb 9 09:43:28.730252 systemd[1]: Reached target swap.target. Feb 9 09:43:28.730263 systemd[1]: Reached target torcx.target. Feb 9 09:43:28.730274 systemd[1]: Reached target veritysetup.target. Feb 9 09:43:28.730285 systemd[1]: Listening on systemd-coredump.socket. Feb 9 09:43:28.730295 systemd[1]: Listening on systemd-initctl.socket. Feb 9 09:43:28.730308 systemd[1]: Listening on systemd-networkd.socket. Feb 9 09:43:28.730319 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 09:43:28.730329 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 09:43:28.730339 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 09:43:28.730349 systemd[1]: Mounting dev-hugepages.mount... Feb 9 09:43:28.730359 systemd[1]: Mounting dev-mqueue.mount... Feb 9 09:43:28.730371 systemd[1]: Mounting media.mount... Feb 9 09:43:28.730382 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 09:43:28.730392 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 09:43:28.730401 systemd[1]: Mounting tmp.mount... Feb 9 09:43:28.730412 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 09:43:28.730422 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 09:43:28.730433 systemd[1]: Starting kmod-static-nodes.service... Feb 9 09:43:28.730443 systemd[1]: Starting modprobe@configfs.service... 
Feb 9 09:43:28.730454 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 09:43:28.730466 systemd[1]: Starting modprobe@drm.service... Feb 9 09:43:28.730476 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 09:43:28.730487 systemd[1]: Starting modprobe@fuse.service... Feb 9 09:43:28.730496 systemd[1]: Starting modprobe@loop.service... Feb 9 09:43:28.730507 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 09:43:28.730517 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 09:43:28.730527 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 09:43:28.730538 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 09:43:28.730548 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 09:43:28.730559 systemd[1]: Stopped systemd-journald.service. Feb 9 09:43:28.730569 kernel: fuse: init (API version 7.34) Feb 9 09:43:28.730579 systemd[1]: Starting systemd-journald.service... Feb 9 09:43:28.730589 kernel: loop: module loaded Feb 9 09:43:28.730600 systemd[1]: Starting systemd-modules-load.service... Feb 9 09:43:28.730612 systemd[1]: Starting systemd-network-generator.service... Feb 9 09:43:28.730623 systemd[1]: Starting systemd-remount-fs.service... Feb 9 09:43:28.730633 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 09:43:28.730644 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 09:43:28.730654 systemd[1]: Stopped verity-setup.service. Feb 9 09:43:28.730664 systemd[1]: Mounted dev-hugepages.mount. Feb 9 09:43:28.730674 systemd[1]: Mounted dev-mqueue.mount. Feb 9 09:43:28.730685 systemd[1]: Mounted media.mount. Feb 9 09:43:28.730696 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 09:43:28.730706 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 09:43:28.730716 systemd[1]: Mounted tmp.mount. Feb 9 09:43:28.730726 systemd[1]: Finished kmod-static-nodes.service. 
Feb 9 09:43:28.730738 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 09:43:28.730748 systemd[1]: Finished modprobe@configfs.service. Feb 9 09:43:28.730758 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 09:43:28.730768 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 09:43:28.730779 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 09:43:28.730789 systemd[1]: Finished modprobe@drm.service. Feb 9 09:43:28.730854 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 09:43:28.730866 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 09:43:28.730877 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 09:43:28.730890 systemd[1]: Finished modprobe@fuse.service. Feb 9 09:43:28.730905 systemd-journald[997]: Journal started Feb 9 09:43:28.730951 systemd-journald[997]: Runtime Journal (/run/log/journal/67442d716e2e473681b08462e771dd91) is 6.0M, max 48.7M, 42.6M free. Feb 9 09:43:26.734000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 09:43:26.896000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 09:43:26.896000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 09:43:26.896000 audit: BPF prog-id=10 op=LOAD Feb 9 09:43:26.896000 audit: BPF prog-id=10 op=UNLOAD Feb 9 09:43:26.896000 audit: BPF prog-id=11 op=LOAD Feb 9 09:43:26.896000 audit: BPF prog-id=11 op=UNLOAD Feb 9 09:43:26.938000 audit[931]: AVC avc: denied { associate } for pid=931 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 
9 09:43:26.938000 audit[931]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c58b2 a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=914 pid=931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:26.938000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 09:43:26.938000 audit[931]: AVC avc: denied { associate } for pid=931 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 09:43:26.938000 audit[931]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5989 a2=1ed a3=0 items=2 ppid=914 pid=931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:26.938000 audit: CWD cwd="/" Feb 9 09:43:26.938000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:43:26.938000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:43:26.938000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 09:43:28.604000 audit: BPF prog-id=12 op=LOAD Feb 9 09:43:28.604000 audit: BPF prog-id=3 op=UNLOAD Feb 9 09:43:28.604000 audit: BPF prog-id=13 op=LOAD Feb 9 09:43:28.604000 audit: BPF prog-id=14 op=LOAD Feb 9 09:43:28.604000 audit: BPF prog-id=4 op=UNLOAD Feb 9 09:43:28.604000 audit: BPF prog-id=5 op=UNLOAD Feb 9 09:43:28.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:28.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:28.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:28.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:28.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:43:28.614000 audit: BPF prog-id=12 op=UNLOAD Feb 9 09:43:28.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:28.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:28.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:28.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:28.694000 audit: BPF prog-id=15 op=LOAD Feb 9 09:43:28.694000 audit: BPF prog-id=16 op=LOAD Feb 9 09:43:28.694000 audit: BPF prog-id=17 op=LOAD Feb 9 09:43:28.694000 audit: BPF prog-id=13 op=UNLOAD Feb 9 09:43:28.694000 audit: BPF prog-id=14 op=UNLOAD Feb 9 09:43:28.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:28.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:28.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:43:28.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:28.723000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 09:43:28.723000 audit[997]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffc3c57e50 a2=4000 a3=1 items=0 ppid=1 pid=997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:28.723000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 09:43:28.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:28.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:28.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:28.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:28.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 09:43:28.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:28.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:28.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:28.732108 systemd[1]: Started systemd-journald.service. Feb 9 09:43:28.603115 systemd[1]: Queued start job for default target multi-user.target. Feb 9 09:43:26.937021 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T09:43:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:43:28.603127 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 9 09:43:26.937533 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T09:43:26Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 09:43:28.606403 systemd[1]: systemd-journald.service: Deactivated successfully. 
Feb 9 09:43:26.937553 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T09:43:26Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 09:43:26.937586 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T09:43:26Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 09:43:26.937596 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T09:43:26Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 09:43:26.937626 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T09:43:26Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 09:43:28.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:26.937637 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T09:43:26Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 09:43:26.937863 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T09:43:26Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 09:43:26.937901 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T09:43:26Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 09:43:28.733222 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Feb 9 09:43:26.937913 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T09:43:26Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 09:43:26.938353 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T09:43:26Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 09:43:26.938388 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T09:43:26Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 09:43:26.938407 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T09:43:26Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 09:43:26.938421 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T09:43:26Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 09:43:26.938438 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T09:43:26Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 09:43:26.938452 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T09:43:26Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 09:43:28.355821 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T09:43:28Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:43:28.356093 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T09:43:28Z" level=debug 
msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:43:28.356197 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T09:43:28Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:43:28.356354 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T09:43:28Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:43:28.356399 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T09:43:28Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 09:43:28.356454 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-02-09T09:43:28Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 09:43:28.734994 systemd[1]: Finished modprobe@loop.service. Feb 9 09:43:28.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:43:28.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:28.736410 systemd[1]: Finished systemd-modules-load.service. Feb 9 09:43:28.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:28.737598 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 09:43:28.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:28.738659 systemd[1]: Finished systemd-network-generator.service. Feb 9 09:43:28.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:28.739837 systemd[1]: Finished systemd-remount-fs.service. Feb 9 09:43:28.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:28.741219 systemd[1]: Reached target network-pre.target. Feb 9 09:43:28.743137 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 09:43:28.744934 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 09:43:28.745609 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 09:43:28.748713 systemd[1]: Starting systemd-hwdb-update.service... 
Feb 9 09:43:28.750709 systemd[1]: Starting systemd-journal-flush.service... Feb 9 09:43:28.751613 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 09:43:28.752601 systemd[1]: Starting systemd-random-seed.service... Feb 9 09:43:28.753550 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 09:43:28.760676 systemd-journald[997]: Time spent on flushing to /var/log/journal/67442d716e2e473681b08462e771dd91 is 11.981ms for 993 entries. Feb 9 09:43:28.760676 systemd-journald[997]: System Journal (/var/log/journal/67442d716e2e473681b08462e771dd91) is 8.0M, max 195.6M, 187.6M free. Feb 9 09:43:28.809292 systemd-journald[997]: Received client request to flush runtime journal. Feb 9 09:43:28.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:28.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:28.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:28.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:28.754589 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:43:28.756425 systemd[1]: Starting systemd-sysusers.service... Feb 9 09:43:28.760143 systemd[1]: Finished systemd-udev-trigger.service. 
Feb 9 09:43:28.810032 udevadm[1032]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 09:43:28.761984 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 09:43:28.765712 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 09:43:28.767899 systemd[1]: Starting systemd-udev-settle.service... Feb 9 09:43:28.773472 systemd[1]: Finished systemd-random-seed.service. Feb 9 09:43:28.774479 systemd[1]: Reached target first-boot-complete.target. Feb 9 09:43:28.780833 systemd[1]: Finished systemd-sysctl.service. Feb 9 09:43:28.794065 systemd[1]: Finished systemd-sysusers.service. Feb 9 09:43:28.810225 systemd[1]: Finished systemd-journal-flush.service. Feb 9 09:43:28.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:29.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:29.160948 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 09:43:29.161000 audit: BPF prog-id=18 op=LOAD Feb 9 09:43:29.161000 audit: BPF prog-id=19 op=LOAD Feb 9 09:43:29.161000 audit: BPF prog-id=7 op=UNLOAD Feb 9 09:43:29.161000 audit: BPF prog-id=8 op=UNLOAD Feb 9 09:43:29.163185 systemd[1]: Starting systemd-udevd.service... Feb 9 09:43:29.181781 systemd-udevd[1035]: Using default interface naming scheme 'v252'. Feb 9 09:43:29.196673 systemd[1]: Started systemd-udevd.service. Feb 9 09:43:29.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:43:29.202000 audit: BPF prog-id=20 op=LOAD Feb 9 09:43:29.204077 systemd[1]: Starting systemd-networkd.service... Feb 9 09:43:29.208000 audit: BPF prog-id=21 op=LOAD Feb 9 09:43:29.208000 audit: BPF prog-id=22 op=LOAD Feb 9 09:43:29.208000 audit: BPF prog-id=23 op=LOAD Feb 9 09:43:29.209738 systemd[1]: Starting systemd-userdbd.service... Feb 9 09:43:29.237354 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Feb 9 09:43:29.246085 systemd[1]: Started systemd-userdbd.service. Feb 9 09:43:29.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:29.278303 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 09:43:29.292452 systemd-networkd[1055]: lo: Link UP Feb 9 09:43:29.292460 systemd-networkd[1055]: lo: Gained carrier Feb 9 09:43:29.292771 systemd-networkd[1055]: Enumeration completed Feb 9 09:43:29.292891 systemd[1]: Started systemd-networkd.service. Feb 9 09:43:29.293146 systemd-networkd[1055]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 09:43:29.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:29.299431 systemd-networkd[1055]: eth0: Link UP Feb 9 09:43:29.299441 systemd-networkd[1055]: eth0: Gained carrier Feb 9 09:43:29.311090 systemd[1]: Finished systemd-udev-settle.service. Feb 9 09:43:29.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:29.313093 systemd[1]: Starting lvm2-activation-early.service... 
Feb 9 09:43:29.319921 systemd-networkd[1055]: eth0: DHCPv4 address 10.0.0.14/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 09:43:29.323335 lvm[1068]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:43:29.351677 systemd[1]: Finished lvm2-activation-early.service. Feb 9 09:43:29.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:29.352525 systemd[1]: Reached target cryptsetup.target. Feb 9 09:43:29.354312 systemd[1]: Starting lvm2-activation.service... Feb 9 09:43:29.357830 lvm[1069]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:43:29.390774 systemd[1]: Finished lvm2-activation.service. Feb 9 09:43:29.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:29.391525 systemd[1]: Reached target local-fs-pre.target. Feb 9 09:43:29.392167 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 09:43:29.392195 systemd[1]: Reached target local-fs.target. Feb 9 09:43:29.392729 systemd[1]: Reached target machines.target. Feb 9 09:43:29.394504 systemd[1]: Starting ldconfig.service... Feb 9 09:43:29.395378 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 09:43:29.395431 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:43:29.396451 systemd[1]: Starting systemd-boot-update.service... 
Feb 9 09:43:29.398166 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 09:43:29.400097 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 09:43:29.400815 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:43:29.400877 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:43:29.401781 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 09:43:29.403418 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1071 (bootctl) Feb 9 09:43:29.406208 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 09:43:29.410371 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 09:43:29.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:29.424857 systemd-tmpfiles[1074]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 09:43:29.426743 systemd-tmpfiles[1074]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 09:43:29.429425 systemd-tmpfiles[1074]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 09:43:29.437438 systemd-fsck[1080]: fsck.fat 4.2 (2021-01-31) Feb 9 09:43:29.437438 systemd-fsck[1080]: /dev/vda1: 236 files, 113719/258078 clusters Feb 9 09:43:29.439221 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 09:43:29.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:43:29.441303 systemd[1]: Mounting boot.mount... Feb 9 09:43:29.507221 systemd[1]: Mounted boot.mount. Feb 9 09:43:29.509775 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 09:43:29.510362 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 09:43:29.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:29.520202 systemd[1]: Finished systemd-boot-update.service. Feb 9 09:43:29.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:29.582571 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 09:43:29.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:29.584927 systemd[1]: Starting audit-rules.service... Feb 9 09:43:29.586640 systemd[1]: Starting clean-ca-certificates.service... Feb 9 09:43:29.588632 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 09:43:29.589000 audit: BPF prog-id=24 op=LOAD Feb 9 09:43:29.591532 systemd[1]: Starting systemd-resolved.service... Feb 9 09:43:29.592000 audit: BPF prog-id=25 op=LOAD Feb 9 09:43:29.593970 systemd[1]: Starting systemd-timesyncd.service... Feb 9 09:43:29.597890 systemd[1]: Starting systemd-update-utmp.service... Feb 9 09:43:29.599241 systemd[1]: Finished clean-ca-certificates.service. 
Feb 9 09:43:29.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:29.600634 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 09:43:29.605000 audit[1094]: SYSTEM_BOOT pid=1094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 09:43:29.608908 systemd[1]: Finished systemd-update-utmp.service. Feb 9 09:43:29.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:29.610784 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 09:43:29.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:29.618833 ldconfig[1070]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 09:43:29.625875 systemd[1]: Finished ldconfig.service. Feb 9 09:43:29.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:29.627715 systemd[1]: Starting systemd-update-done.service... Feb 9 09:43:29.634137 systemd[1]: Finished systemd-update-done.service. 
Feb 9 09:43:29.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:29.634000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 09:43:29.634000 audit[1104]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc16462d0 a2=420 a3=0 items=0 ppid=1083 pid=1104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:29.634000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 09:43:29.635480 augenrules[1104]: No rules Feb 9 09:43:29.636145 systemd[1]: Finished audit-rules.service. Feb 9 09:43:29.648848 systemd[1]: Started systemd-timesyncd.service. Feb 9 09:43:30.066940 systemd-timesyncd[1090]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 9 09:43:30.066996 systemd-timesyncd[1090]: Initial clock synchronization to Fri 2024-02-09 09:43:30.066863 UTC. Feb 9 09:43:30.067434 systemd[1]: Reached target time-set.target. Feb 9 09:43:30.068775 systemd-resolved[1087]: Positive Trust Anchors: Feb 9 09:43:30.068996 systemd-resolved[1087]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 09:43:30.069078 systemd-resolved[1087]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 09:43:30.076638 systemd-resolved[1087]: Defaulting to hostname 'linux'. Feb 9 09:43:30.078140 systemd[1]: Started systemd-resolved.service. Feb 9 09:43:30.078907 systemd[1]: Reached target network.target. Feb 9 09:43:30.079484 systemd[1]: Reached target nss-lookup.target. Feb 9 09:43:30.080039 systemd[1]: Reached target sysinit.target. Feb 9 09:43:30.080685 systemd[1]: Started motdgen.path. Feb 9 09:43:30.081208 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 09:43:30.082096 systemd[1]: Started logrotate.timer. Feb 9 09:43:30.082903 systemd[1]: Started mdadm.timer. Feb 9 09:43:30.083577 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 09:43:30.084353 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 09:43:30.084390 systemd[1]: Reached target paths.target. Feb 9 09:43:30.085041 systemd[1]: Reached target timers.target. Feb 9 09:43:30.086108 systemd[1]: Listening on dbus.socket. Feb 9 09:43:30.087918 systemd[1]: Starting docker.socket... Feb 9 09:43:30.090839 systemd[1]: Listening on sshd.socket. Feb 9 09:43:30.091643 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Feb 9 09:43:30.092044 systemd[1]: Listening on docker.socket. Feb 9 09:43:30.092839 systemd[1]: Reached target sockets.target. Feb 9 09:43:30.093550 systemd[1]: Reached target basic.target. Feb 9 09:43:30.094266 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:43:30.094297 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:43:30.095330 systemd[1]: Starting containerd.service... Feb 9 09:43:30.097078 systemd[1]: Starting dbus.service... Feb 9 09:43:30.098837 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 09:43:30.100756 systemd[1]: Starting extend-filesystems.service... Feb 9 09:43:30.101551 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 09:43:30.102707 systemd[1]: Starting motdgen.service... Feb 9 09:43:30.104386 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 09:43:30.108090 systemd[1]: Starting prepare-critools.service... Feb 9 09:43:30.110130 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 09:43:30.112427 systemd[1]: Starting sshd-keygen.service... Feb 9 09:43:30.112701 jq[1114]: false Feb 9 09:43:30.117893 systemd[1]: Starting systemd-logind.service... 
Feb 9 09:43:30.119443 extend-filesystems[1115]: Found vda Feb 9 09:43:30.119443 extend-filesystems[1115]: Found vda1 Feb 9 09:43:30.119443 extend-filesystems[1115]: Found vda2 Feb 9 09:43:30.119443 extend-filesystems[1115]: Found vda3 Feb 9 09:43:30.119443 extend-filesystems[1115]: Found usr Feb 9 09:43:30.119443 extend-filesystems[1115]: Found vda4 Feb 9 09:43:30.119443 extend-filesystems[1115]: Found vda6 Feb 9 09:43:30.119443 extend-filesystems[1115]: Found vda7 Feb 9 09:43:30.119443 extend-filesystems[1115]: Found vda9 Feb 9 09:43:30.119443 extend-filesystems[1115]: Checking size of /dev/vda9 Feb 9 09:43:30.118635 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:43:30.118742 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 09:43:30.119152 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 09:43:30.131409 jq[1135]: true Feb 9 09:43:30.119868 systemd[1]: Starting update-engine.service... Feb 9 09:43:30.122429 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 09:43:30.125768 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 09:43:30.125975 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 09:43:30.126607 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 09:43:30.126758 systemd[1]: Finished motdgen.service. Feb 9 09:43:30.130177 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 09:43:30.130650 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Feb 9 09:43:30.143536 jq[1139]: true Feb 9 09:43:30.152997 tar[1137]: ./ Feb 9 09:43:30.152997 tar[1137]: ./macvlan Feb 9 09:43:30.163159 tar[1138]: crictl Feb 9 09:43:30.172606 extend-filesystems[1115]: Resized partition /dev/vda9 Feb 9 09:43:30.177459 dbus-daemon[1113]: [system] SELinux support is enabled Feb 9 09:43:30.178486 systemd[1]: Started dbus.service. Feb 9 09:43:30.179647 extend-filesystems[1161]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 09:43:30.180877 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 09:43:30.180908 systemd[1]: Reached target system-config.target. Feb 9 09:43:30.181594 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 09:43:30.181612 systemd[1]: Reached target user-config.target. Feb 9 09:43:30.196604 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 9 09:43:30.202961 systemd-logind[1131]: Watching system buttons on /dev/input/event0 (Power Button) Feb 9 09:43:30.203588 systemd-logind[1131]: New seat seat0. Feb 9 09:43:30.205095 systemd[1]: Started systemd-logind.service. Feb 9 09:43:30.227944 update_engine[1133]: I0209 09:43:30.227659 1133 main.cc:92] Flatcar Update Engine starting Feb 9 09:43:30.232724 systemd[1]: Started update-engine.service. Feb 9 09:43:30.233939 update_engine[1133]: I0209 09:43:30.232729 1133 update_check_scheduler.cc:74] Next update check in 8m48s Feb 9 09:43:30.235356 systemd[1]: Started locksmithd.service. 
Feb 9 09:43:30.247604 tar[1137]: ./static Feb 9 09:43:30.262231 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 9 09:43:30.291598 env[1140]: time="2024-02-09T09:43:30.290766125Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 09:43:30.294394 bash[1167]: Updated "/home/core/.ssh/authorized_keys" Feb 9 09:43:30.296806 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 09:43:30.300480 extend-filesystems[1161]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 9 09:43:30.300480 extend-filesystems[1161]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 09:43:30.300480 extend-filesystems[1161]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 9 09:43:30.304715 extend-filesystems[1115]: Resized filesystem in /dev/vda9 Feb 9 09:43:30.302100 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 09:43:30.302320 systemd[1]: Finished extend-filesystems.service. Feb 9 09:43:30.305635 tar[1137]: ./vlan Feb 9 09:43:30.321139 env[1140]: time="2024-02-09T09:43:30.321078725Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 09:43:30.321299 env[1140]: time="2024-02-09T09:43:30.321281805Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:43:30.322663 env[1140]: time="2024-02-09T09:43:30.322621725Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:43:30.322663 env[1140]: time="2024-02-09T09:43:30.322657685Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Feb 9 09:43:30.322907 env[1140]: time="2024-02-09T09:43:30.322881085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:43:30.322954 env[1140]: time="2024-02-09T09:43:30.322906205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 09:43:30.322954 env[1140]: time="2024-02-09T09:43:30.322920085Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 09:43:30.322954 env[1140]: time="2024-02-09T09:43:30.322930165Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 09:43:30.323016 env[1140]: time="2024-02-09T09:43:30.323005085Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:43:30.323344 env[1140]: time="2024-02-09T09:43:30.323317045Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:43:30.323485 env[1140]: time="2024-02-09T09:43:30.323457565Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:43:30.323485 env[1140]: time="2024-02-09T09:43:30.323479685Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 9 09:43:30.323555 env[1140]: time="2024-02-09T09:43:30.323536525Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 09:43:30.323555 env[1140]: time="2024-02-09T09:43:30.323553125Z" level=info msg="metadata content store policy set" policy=shared Feb 9 09:43:30.339164 tar[1137]: ./portmap Feb 9 09:43:30.354271 locksmithd[1168]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 09:43:30.369387 tar[1137]: ./host-local Feb 9 09:43:30.376202 env[1140]: time="2024-02-09T09:43:30.376148125Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 09:43:30.376324 env[1140]: time="2024-02-09T09:43:30.376231845Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 09:43:30.376324 env[1140]: time="2024-02-09T09:43:30.376255005Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 09:43:30.376324 env[1140]: time="2024-02-09T09:43:30.376297205Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 09:43:30.376324 env[1140]: time="2024-02-09T09:43:30.376320605Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 09:43:30.376404 env[1140]: time="2024-02-09T09:43:30.376337125Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 09:43:30.376404 env[1140]: time="2024-02-09T09:43:30.376358765Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 09:43:30.376769 env[1140]: time="2024-02-09T09:43:30.376742005Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Feb 9 09:43:30.376817 env[1140]: time="2024-02-09T09:43:30.376768725Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 09:43:30.376817 env[1140]: time="2024-02-09T09:43:30.376783245Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 09:43:30.376817 env[1140]: time="2024-02-09T09:43:30.376807325Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 09:43:30.376874 env[1140]: time="2024-02-09T09:43:30.376823605Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 09:43:30.377010 env[1140]: time="2024-02-09T09:43:30.376985285Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 09:43:30.377100 env[1140]: time="2024-02-09T09:43:30.377081965Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 09:43:30.377383 env[1140]: time="2024-02-09T09:43:30.377344685Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 09:43:30.377423 env[1140]: time="2024-02-09T09:43:30.377391445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 09:43:30.377423 env[1140]: time="2024-02-09T09:43:30.377404965Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 09:43:30.377543 env[1140]: time="2024-02-09T09:43:30.377524405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 09:43:30.377575 env[1140]: time="2024-02-09T09:43:30.377543765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Feb 9 09:43:30.377575 env[1140]: time="2024-02-09T09:43:30.377556805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 09:43:30.377575 env[1140]: time="2024-02-09T09:43:30.377567925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 09:43:30.377644 env[1140]: time="2024-02-09T09:43:30.377593645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 09:43:30.377644 env[1140]: time="2024-02-09T09:43:30.377606645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 09:43:30.377644 env[1140]: time="2024-02-09T09:43:30.377618045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 09:43:30.377697 env[1140]: time="2024-02-09T09:43:30.377629565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 09:43:30.377720 env[1140]: time="2024-02-09T09:43:30.377695925Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 09:43:30.377872 env[1140]: time="2024-02-09T09:43:30.377847245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 09:43:30.377907 env[1140]: time="2024-02-09T09:43:30.377878805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 09:43:30.377907 env[1140]: time="2024-02-09T09:43:30.377892285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 09:43:30.377907 env[1140]: time="2024-02-09T09:43:30.377903685Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Feb 9 09:43:30.377962 env[1140]: time="2024-02-09T09:43:30.377920765Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 09:43:30.377962 env[1140]: time="2024-02-09T09:43:30.377932605Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 09:43:30.377962 env[1140]: time="2024-02-09T09:43:30.377957605Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 09:43:30.378025 env[1140]: time="2024-02-09T09:43:30.377991765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 09:43:30.378284 env[1140]: time="2024-02-09T09:43:30.378215085Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} 
ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 09:43:30.380699 env[1140]: time="2024-02-09T09:43:30.378287965Z" level=info msg="Connect containerd service" Feb 9 09:43:30.380699 env[1140]: time="2024-02-09T09:43:30.378328005Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 09:43:30.380699 env[1140]: time="2024-02-09T09:43:30.379074165Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 09:43:30.380699 env[1140]: time="2024-02-09T09:43:30.379344885Z" level=info msg="Start subscribing containerd event" Feb 9 09:43:30.380699 env[1140]: time="2024-02-09T09:43:30.379409045Z" level=info msg="Start recovering state" Feb 9 09:43:30.380699 env[1140]: time="2024-02-09T09:43:30.379477525Z" level=info msg="Start event monitor" Feb 9 09:43:30.380699 env[1140]: time="2024-02-09T09:43:30.379492845Z" level=info msg="Start snapshots syncer" Feb 9 
09:43:30.380699 env[1140]: time="2024-02-09T09:43:30.379502445Z" level=info msg="Start cni network conf syncer for default" Feb 9 09:43:30.380699 env[1140]: time="2024-02-09T09:43:30.379511805Z" level=info msg="Start streaming server" Feb 9 09:43:30.380699 env[1140]: time="2024-02-09T09:43:30.379653125Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 09:43:30.380699 env[1140]: time="2024-02-09T09:43:30.379717685Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 09:43:30.379868 systemd[1]: Started containerd.service. Feb 9 09:43:30.385653 env[1140]: time="2024-02-09T09:43:30.379777165Z" level=info msg="containerd successfully booted in 0.104611s" Feb 9 09:43:30.397474 tar[1137]: ./vrf Feb 9 09:43:30.426833 tar[1137]: ./bridge Feb 9 09:43:30.461059 tar[1137]: ./tuning Feb 9 09:43:30.489034 tar[1137]: ./firewall Feb 9 09:43:30.524072 tar[1137]: ./host-device Feb 9 09:43:30.555106 tar[1137]: ./sbr Feb 9 09:43:30.583137 tar[1137]: ./loopback Feb 9 09:43:30.610640 tar[1137]: ./dhcp Feb 9 09:43:30.632507 systemd[1]: Finished prepare-critools.service. Feb 9 09:43:30.679273 tar[1137]: ./ptp Feb 9 09:43:30.707140 tar[1137]: ./ipvlan Feb 9 09:43:30.734329 tar[1137]: ./bandwidth Feb 9 09:43:30.772296 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 09:43:31.515480 systemd-networkd[1055]: eth0: Gained IPv6LL Feb 9 09:43:32.068659 sshd_keygen[1134]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 09:43:32.085835 systemd[1]: Finished sshd-keygen.service. Feb 9 09:43:32.088003 systemd[1]: Starting issuegen.service... Feb 9 09:43:32.092172 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 09:43:32.092331 systemd[1]: Finished issuegen.service. Feb 9 09:43:32.094341 systemd[1]: Starting systemd-user-sessions.service... Feb 9 09:43:32.100365 systemd[1]: Finished systemd-user-sessions.service. Feb 9 09:43:32.102451 systemd[1]: Started getty@tty1.service. 
Feb 9 09:43:32.104390 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 9 09:43:32.105182 systemd[1]: Reached target getty.target. Feb 9 09:43:32.105966 systemd[1]: Reached target multi-user.target. Feb 9 09:43:32.107705 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 09:43:32.114223 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 09:43:32.114400 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 09:43:32.115393 systemd[1]: Startup finished in 557ms (kernel) + 5.126s (initrd) + 4.995s (userspace) = 10.679s. Feb 9 09:43:34.918874 systemd[1]: Created slice system-sshd.slice. Feb 9 09:43:34.920020 systemd[1]: Started sshd@0-10.0.0.14:22-10.0.0.1:37294.service. Feb 9 09:43:34.972210 sshd[1199]: Accepted publickey for core from 10.0.0.1 port 37294 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:43:34.974149 sshd[1199]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:43:34.984394 systemd-logind[1131]: New session 1 of user core. Feb 9 09:43:34.985276 systemd[1]: Created slice user-500.slice. Feb 9 09:43:34.986395 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 09:43:34.994098 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 09:43:34.995485 systemd[1]: Starting user@500.service... Feb 9 09:43:34.998049 (systemd)[1202]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:43:35.059112 systemd[1202]: Queued start job for default target default.target. Feb 9 09:43:35.059633 systemd[1202]: Reached target paths.target. Feb 9 09:43:35.059653 systemd[1202]: Reached target sockets.target. Feb 9 09:43:35.059665 systemd[1202]: Reached target timers.target. Feb 9 09:43:35.059675 systemd[1202]: Reached target basic.target. Feb 9 09:43:35.059731 systemd[1202]: Reached target default.target. Feb 9 09:43:35.059759 systemd[1202]: Startup finished in 56ms. 
Feb 9 09:43:35.059804 systemd[1]: Started user@500.service. Feb 9 09:43:35.060836 systemd[1]: Started session-1.scope. Feb 9 09:43:35.111841 systemd[1]: Started sshd@1-10.0.0.14:22-10.0.0.1:37304.service. Feb 9 09:43:35.161295 sshd[1211]: Accepted publickey for core from 10.0.0.1 port 37304 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:43:35.162515 sshd[1211]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:43:35.166084 systemd-logind[1131]: New session 2 of user core. Feb 9 09:43:35.166518 systemd[1]: Started session-2.scope. Feb 9 09:43:35.220739 sshd[1211]: pam_unix(sshd:session): session closed for user core Feb 9 09:43:35.224139 systemd[1]: Started sshd@2-10.0.0.14:22-10.0.0.1:37316.service. Feb 9 09:43:35.224586 systemd[1]: sshd@1-10.0.0.14:22-10.0.0.1:37304.service: Deactivated successfully. Feb 9 09:43:35.225318 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 09:43:35.225770 systemd-logind[1131]: Session 2 logged out. Waiting for processes to exit. Feb 9 09:43:35.226743 systemd-logind[1131]: Removed session 2. Feb 9 09:43:35.271426 sshd[1216]: Accepted publickey for core from 10.0.0.1 port 37316 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:43:35.272688 sshd[1216]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:43:35.276608 systemd-logind[1131]: New session 3 of user core. Feb 9 09:43:35.277380 systemd[1]: Started session-3.scope. Feb 9 09:43:35.325990 sshd[1216]: pam_unix(sshd:session): session closed for user core Feb 9 09:43:35.328978 systemd[1]: sshd@2-10.0.0.14:22-10.0.0.1:37316.service: Deactivated successfully. Feb 9 09:43:35.329601 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 09:43:35.330070 systemd-logind[1131]: Session 3 logged out. Waiting for processes to exit. Feb 9 09:43:35.331099 systemd[1]: Started sshd@3-10.0.0.14:22-10.0.0.1:37332.service. 
Feb 9 09:43:35.331729 systemd-logind[1131]: Removed session 3. Feb 9 09:43:35.374077 sshd[1223]: Accepted publickey for core from 10.0.0.1 port 37332 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:43:35.375582 sshd[1223]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:43:35.378825 systemd-logind[1131]: New session 4 of user core. Feb 9 09:43:35.379634 systemd[1]: Started session-4.scope. Feb 9 09:43:35.432758 sshd[1223]: pam_unix(sshd:session): session closed for user core Feb 9 09:43:35.435065 systemd[1]: sshd@3-10.0.0.14:22-10.0.0.1:37332.service: Deactivated successfully. Feb 9 09:43:35.435644 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 09:43:35.436137 systemd-logind[1131]: Session 4 logged out. Waiting for processes to exit. Feb 9 09:43:35.437119 systemd[1]: Started sshd@4-10.0.0.14:22-10.0.0.1:37334.service. Feb 9 09:43:35.437730 systemd-logind[1131]: Removed session 4. Feb 9 09:43:35.479469 sshd[1229]: Accepted publickey for core from 10.0.0.1 port 37334 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:43:35.480654 sshd[1229]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:43:35.483808 systemd-logind[1131]: New session 5 of user core. Feb 9 09:43:35.484624 systemd[1]: Started session-5.scope. Feb 9 09:43:35.574034 sudo[1232]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 09:43:35.574266 sudo[1232]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 09:43:36.080054 systemd[1]: Reloading. 
Feb 9 09:43:36.118826 /usr/lib/systemd/system-generators/torcx-generator[1262]: time="2024-02-09T09:43:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:43:36.118856 /usr/lib/systemd/system-generators/torcx-generator[1262]: time="2024-02-09T09:43:36Z" level=info msg="torcx already run" Feb 9 09:43:36.169939 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:43:36.169957 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:43:36.185333 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:43:36.246494 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 09:43:36.254053 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 09:43:36.254703 systemd[1]: Reached target network-online.target. Feb 9 09:43:36.256319 systemd[1]: Started kubelet.service. Feb 9 09:43:36.266897 systemd[1]: Starting coreos-metadata.service... Feb 9 09:43:36.273687 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 9 09:43:36.273882 systemd[1]: Finished coreos-metadata.service. 
Feb 9 09:43:36.425139 kubelet[1300]: E0209 09:43:36.425005 1300 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 09:43:36.427891 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:43:36.428018 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:43:36.549980 systemd[1]: Stopped kubelet.service. Feb 9 09:43:36.565960 systemd[1]: Reloading. Feb 9 09:43:36.602086 /usr/lib/systemd/system-generators/torcx-generator[1370]: time="2024-02-09T09:43:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:43:36.602118 /usr/lib/systemd/system-generators/torcx-generator[1370]: time="2024-02-09T09:43:36Z" level=info msg="torcx already run" Feb 9 09:43:36.657478 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:43:36.657500 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:43:36.673169 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:43:36.739529 systemd[1]: Started kubelet.service. Feb 9 09:43:36.778579 kubelet[1406]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Feb 9 09:43:36.778579 kubelet[1406]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:43:36.778935 kubelet[1406]: I0209 09:43:36.778757 1406 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 09:43:36.779992 kubelet[1406]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:43:36.779992 kubelet[1406]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:43:37.592448 kubelet[1406]: I0209 09:43:37.592409 1406 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 09:43:37.592448 kubelet[1406]: I0209 09:43:37.592441 1406 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 09:43:37.592678 kubelet[1406]: I0209 09:43:37.592661 1406 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 09:43:37.596758 kubelet[1406]: I0209 09:43:37.596739 1406 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:43:37.599529 kubelet[1406]: W0209 09:43:37.599504 1406 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 09:43:37.600337 kubelet[1406]: I0209 09:43:37.600311 1406 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 09:43:37.600707 kubelet[1406]: I0209 09:43:37.600685 1406 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 09:43:37.600759 kubelet[1406]: I0209 09:43:37.600749 1406 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 09:43:37.600838 kubelet[1406]: I0209 09:43:37.600828 1406 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 09:43:37.600838 kubelet[1406]: I0209 09:43:37.600839 1406 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 09:43:37.601011 kubelet[1406]: I0209 09:43:37.600987 1406 state_mem.go:36] "Initialized new 
in-memory state store" Feb 9 09:43:37.604844 kubelet[1406]: I0209 09:43:37.604823 1406 kubelet.go:398] "Attempting to sync node with API server" Feb 9 09:43:37.604907 kubelet[1406]: I0209 09:43:37.604850 1406 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 09:43:37.605162 kubelet[1406]: I0209 09:43:37.605103 1406 kubelet.go:297] "Adding apiserver pod source" Feb 9 09:43:37.605162 kubelet[1406]: I0209 09:43:37.605122 1406 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 09:43:37.605234 kubelet[1406]: E0209 09:43:37.605174 1406 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:43:37.605270 kubelet[1406]: E0209 09:43:37.605249 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:43:37.606176 kubelet[1406]: I0209 09:43:37.606158 1406 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 09:43:37.607133 kubelet[1406]: W0209 09:43:37.607114 1406 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 09:43:37.607696 kubelet[1406]: I0209 09:43:37.607675 1406 server.go:1186] "Started kubelet" Feb 9 09:43:37.608555 kubelet[1406]: E0209 09:43:37.608524 1406 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:43:37.608555 kubelet[1406]: E0209 09:43:37.608547 1406 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:43:37.609776 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 9 09:43:37.609829 kubelet[1406]: I0209 09:43:37.609048 1406 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:43:37.609829 kubelet[1406]: I0209 09:43:37.609616 1406 server.go:451] "Adding debug handlers to kubelet server" Feb 9 09:43:37.610020 kubelet[1406]: I0209 09:43:37.610000 1406 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 09:43:37.610893 kubelet[1406]: E0209 09:43:37.610878 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:37.611004 kubelet[1406]: I0209 09:43:37.610993 1406 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 09:43:37.611057 kubelet[1406]: I0209 09:43:37.611049 1406 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 09:43:37.619195 kubelet[1406]: E0209 09:43:37.618376 1406 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.0.0.14" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 09:43:37.619195 kubelet[1406]: W0209 09:43:37.618466 1406 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.14" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:43:37.619195 kubelet[1406]: E0209 09:43:37.618491 1406 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.14" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:43:37.619195 kubelet[1406]: W0209 09:43:37.618560 1406 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 
09:43:37.619195 kubelet[1406]: E0209 09:43:37.618574 1406 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 09:43:37.619357 kubelet[1406]: E0209 09:43:37.618595 1406 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.17b22892507e9c15", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 607650325, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 607650325, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 09:43:37.619357 kubelet[1406]: W0209 09:43:37.618974 1406 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 09:43:37.619357 kubelet[1406]: E0209 09:43:37.618991 1406 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 09:43:37.621954 kubelet[1406]: E0209 09:43:37.621863 1406 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.17b22892508c2e9d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 608539805, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 608539805, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 09:43:37.637761 kubelet[1406]: I0209 09:43:37.637714 1406 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 9 09:43:37.637761 kubelet[1406]: I0209 09:43:37.637749 1406 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 9 09:43:37.637857 kubelet[1406]: I0209 09:43:37.637768 1406 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 09:43:37.638241 kubelet[1406]: E0209 09:43:37.638140 1406 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.17b22892523e94b5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.14 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 637008565, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 637008565, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 09:43:37.639111 kubelet[1406]: E0209 09:43:37.639047 1406 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.17b22892523ea7c5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.14 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 637013445, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 637013445, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 09:43:37.639556 kubelet[1406]: I0209 09:43:37.639537 1406 policy_none.go:49] "None policy: Start"
Feb 9 09:43:37.640046 kubelet[1406]: I0209 09:43:37.640033 1406 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 9 09:43:37.640133 kubelet[1406]: E0209 09:43:37.639994 1406 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.17b22892523eb995", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.14 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 637018005, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 637018005, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 09:43:37.640234 kubelet[1406]: I0209 09:43:37.640123 1406 state_mem.go:35] "Initializing new in-memory state store"
Feb 9 09:43:37.644710 systemd[1]: Created slice kubepods.slice.
Feb 9 09:43:37.648045 systemd[1]: Created slice kubepods-burstable.slice.
Feb 9 09:43:37.650107 systemd[1]: Created slice kubepods-besteffort.slice.
Feb 9 09:43:37.656776 kubelet[1406]: I0209 09:43:37.656754 1406 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 9 09:43:37.656946 kubelet[1406]: I0209 09:43:37.656930 1406 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 9 09:43:37.658205 kubelet[1406]: E0209 09:43:37.658171 1406 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.14\" not found"
Feb 9 09:43:37.659070 kubelet[1406]: E0209 09:43:37.658932 1406 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.17b22892537debcd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 657936845, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 657936845, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 09:43:37.712803 kubelet[1406]: I0209 09:43:37.712744 1406 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.14"
Feb 9 09:43:37.714141 kubelet[1406]: E0209 09:43:37.714111 1406 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.14"
Feb 9 09:43:37.714273 kubelet[1406]: E0209 09:43:37.714083 1406 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.17b22892523e94b5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.14 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 637008565, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 712632445, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.14.17b22892523e94b5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 09:43:37.715270 kubelet[1406]: E0209 09:43:37.715201 1406 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.17b22892523ea7c5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.14 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 637013445, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 712698205, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.14.17b22892523ea7c5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 09:43:37.716278 kubelet[1406]: E0209 09:43:37.716223 1406 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.17b22892523eb995", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.14 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 637018005, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 712716005, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.14.17b22892523eb995" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 09:43:37.732629 kubelet[1406]: I0209 09:43:37.732600 1406 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 9 09:43:37.752123 kubelet[1406]: I0209 09:43:37.752093 1406 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 9 09:43:37.752123 kubelet[1406]: I0209 09:43:37.752117 1406 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 9 09:43:37.752258 kubelet[1406]: I0209 09:43:37.752136 1406 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 9 09:43:37.752258 kubelet[1406]: E0209 09:43:37.752195 1406 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 9 09:43:37.753847 kubelet[1406]: W0209 09:43:37.753821 1406 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 09:43:37.753847 kubelet[1406]: E0209 09:43:37.753851 1406 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 09:43:37.820057 kubelet[1406]: E0209 09:43:37.820026 1406 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.0.0.14" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 9 09:43:37.915178 kubelet[1406]: I0209 09:43:37.915085 1406 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.14"
Feb 9 09:43:37.916488 kubelet[1406]: E0209 09:43:37.916410 1406 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.14"
Feb 9 09:43:37.916658 kubelet[1406]: E0209 09:43:37.916502 1406 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.17b22892523e94b5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.14 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 637008565, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 915022845, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.14.17b22892523e94b5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 09:43:37.917870 kubelet[1406]: E0209 09:43:37.917765 1406 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.17b22892523ea7c5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.14 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 637013445, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 915055165, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.14.17b22892523ea7c5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 09:43:38.010598 kubelet[1406]: E0209 09:43:38.010508 1406 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.17b22892523eb995", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.14 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 637018005, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 915059245, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.14.17b22892523eb995" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 09:43:38.221991 kubelet[1406]: E0209 09:43:38.221895 1406 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.0.0.14" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 9 09:43:38.318059 kubelet[1406]: I0209 09:43:38.318032 1406 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.14"
Feb 9 09:43:38.319424 kubelet[1406]: E0209 09:43:38.319399 1406 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.14"
Feb 9 09:43:38.319559 kubelet[1406]: E0209 09:43:38.319475 1406 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.17b22892523e94b5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.14 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 637008565, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 38, 317992365, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.14.17b22892523e94b5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 09:43:38.410134 kubelet[1406]: E0209 09:43:38.410029 1406 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.17b22892523ea7c5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.14 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 637013445, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 38, 318002645, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.14.17b22892523ea7c5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 09:43:38.606023 kubelet[1406]: E0209 09:43:38.605899 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:43:38.610719 kubelet[1406]: E0209 09:43:38.610632 1406 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.17b22892523eb995", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.14 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 637018005, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 38, 318005925, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.14.17b22892523eb995" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 09:43:38.693474 kubelet[1406]: W0209 09:43:38.693423 1406 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 09:43:38.693474 kubelet[1406]: E0209 09:43:38.693463 1406 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 09:43:38.703106 kubelet[1406]: W0209 09:43:38.703076 1406 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 09:43:38.703106 kubelet[1406]: E0209 09:43:38.703101 1406 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 09:43:38.773588 kubelet[1406]: W0209 09:43:38.773560 1406 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.14" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 09:43:38.773737 kubelet[1406]: E0209 09:43:38.773721 1406 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.14" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 09:43:38.919661 kubelet[1406]: W0209 09:43:38.919550 1406 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 09:43:38.919661 kubelet[1406]: E0209 09:43:38.919583 1406 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 09:43:39.024199 kubelet[1406]: E0209 09:43:39.024162 1406 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.0.0.14" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 9 09:43:39.120224 kubelet[1406]: I0209 09:43:39.120183 1406 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.14"
Feb 9 09:43:39.121580 kubelet[1406]: E0209 09:43:39.121498 1406 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.17b22892523e94b5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.14 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 637008565, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 39, 120143805, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.14.17b22892523e94b5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 09:43:39.121740 kubelet[1406]: E0209 09:43:39.121564 1406 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.14"
Feb 9 09:43:39.122408 kubelet[1406]: E0209 09:43:39.122336 1406 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.17b22892523ea7c5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.14 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 637013445, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 39, 120154125, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.14.17b22892523ea7c5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 09:43:39.210094 kubelet[1406]: E0209 09:43:39.209941 1406 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.17b22892523eb995", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.14 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 637018005, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 39, 120157125, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.14.17b22892523eb995" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 09:43:39.607117 kubelet[1406]: E0209 09:43:39.607012 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:43:40.607830 kubelet[1406]: E0209 09:43:40.607795 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:43:40.625931 kubelet[1406]: E0209 09:43:40.625902 1406 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.0.0.14" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 9 09:43:40.686490 kubelet[1406]: W0209 09:43:40.686429 1406 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 09:43:40.686490 kubelet[1406]: E0209 09:43:40.686460 1406 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 09:43:40.723392 kubelet[1406]: I0209 09:43:40.723362 1406 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.14"
Feb 9 09:43:40.724482 kubelet[1406]: E0209 09:43:40.724451 1406 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.14"
Feb 9 09:43:40.724591 kubelet[1406]: E0209 09:43:40.724418 1406 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.17b22892523e94b5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.14 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 637008565, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 40, 723303405, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.14.17b22892523e94b5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 09:43:40.725490 kubelet[1406]: E0209 09:43:40.725437 1406 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.17b22892523ea7c5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.14 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 637013445, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 40, 723316045, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.14.17b22892523ea7c5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 09:43:40.726327 kubelet[1406]: E0209 09:43:40.726267 1406 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.17b22892523eb995", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.14 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 637018005, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 40, 723323125, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.14.17b22892523eb995" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:43:40.889034 kubelet[1406]: W0209 09:43:40.888941 1406 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.14" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:43:40.889340 kubelet[1406]: E0209 09:43:40.889321 1406 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.14" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:43:41.056954 kubelet[1406]: W0209 09:43:41.056915 1406 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 09:43:41.056954 kubelet[1406]: E0209 09:43:41.056948 1406 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 09:43:41.458800 kubelet[1406]: W0209 09:43:41.458769 1406 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 09:43:41.458989 kubelet[1406]: E0209 09:43:41.458976 1406 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 09:43:41.608783 kubelet[1406]: E0209 09:43:41.608751 1406 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:43:42.609753 kubelet[1406]: E0209 09:43:42.609705 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:43:43.610673 kubelet[1406]: E0209 09:43:43.610629 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:43:43.828265 kubelet[1406]: E0209 09:43:43.828224 1406 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.0.0.14" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 09:43:43.925307 kubelet[1406]: I0209 09:43:43.925224 1406 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.14" Feb 9 09:43:43.926505 kubelet[1406]: E0209 09:43:43.926485 1406 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.14" Feb 9 09:43:43.926620 kubelet[1406]: E0209 09:43:43.926499 1406 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.17b22892523e94b5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 
10.0.0.14 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 637008565, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 43, 925160965, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.14.17b22892523e94b5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:43:43.927555 kubelet[1406]: E0209 09:43:43.927491 1406 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.17b22892523ea7c5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.14 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 637013445, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 43, 925175765, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events 
"10.0.0.14.17b22892523ea7c5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:43:43.928321 kubelet[1406]: E0209 09:43:43.928270 1406 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.17b22892523eb995", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.14 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 37, 637018005, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 43, 925178965, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.14.17b22892523eb995" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:43:44.333254 kubelet[1406]: W0209 09:43:44.333147 1406 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 09:43:44.333254 kubelet[1406]: E0209 09:43:44.333177 1406 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 09:43:44.611678 kubelet[1406]: E0209 09:43:44.611582 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:43:44.776707 kubelet[1406]: W0209 09:43:44.776669 1406 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 09:43:44.776707 kubelet[1406]: E0209 09:43:44.776702 1406 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 09:43:45.613000 kubelet[1406]: E0209 09:43:45.612947 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:43:45.889654 kubelet[1406]: W0209 09:43:45.889559 1406 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.14" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:43:45.889654 kubelet[1406]: E0209 09:43:45.889594 1406 reflector.go:140] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.14" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:43:46.613581 kubelet[1406]: E0209 09:43:46.613500 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:43:46.697598 kubelet[1406]: W0209 09:43:46.697556 1406 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 09:43:46.697598 kubelet[1406]: E0209 09:43:46.697591 1406 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 09:43:47.594490 kubelet[1406]: I0209 09:43:47.594425 1406 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 9 09:43:47.613986 kubelet[1406]: E0209 09:43:47.613951 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:43:47.659371 kubelet[1406]: E0209 09:43:47.659344 1406 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.14\" not found" Feb 9 09:43:47.967984 kubelet[1406]: E0209 09:43:47.967599 1406 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.14" not found Feb 9 09:43:48.614590 kubelet[1406]: E0209 09:43:48.614529 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 09:43:49.022706 kubelet[1406]: E0209 09:43:49.022579 1406 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.14" not found Feb 9 09:43:49.614696 kubelet[1406]: E0209 09:43:49.614644 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:43:50.232256 kubelet[1406]: E0209 09:43:50.232174 1406 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.14\" not found" node="10.0.0.14" Feb 9 09:43:50.328166 kubelet[1406]: I0209 09:43:50.328129 1406 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.14" Feb 9 09:43:50.422638 kubelet[1406]: I0209 09:43:50.422591 1406 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.14" Feb 9 09:43:50.431557 kubelet[1406]: E0209 09:43:50.431493 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:50.532091 kubelet[1406]: E0209 09:43:50.531971 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:50.615805 kubelet[1406]: E0209 09:43:50.615768 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:43:50.632199 kubelet[1406]: E0209 09:43:50.632104 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:50.732924 kubelet[1406]: E0209 09:43:50.732862 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:50.753661 sudo[1232]: pam_unix(sudo:session): session closed for user root Feb 9 09:43:50.758274 sshd[1229]: pam_unix(sshd:session): session closed for user core Feb 9 09:43:50.760357 systemd[1]: 
sshd@4-10.0.0.14:22-10.0.0.1:37334.service: Deactivated successfully. Feb 9 09:43:50.761067 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 09:43:50.761702 systemd-logind[1131]: Session 5 logged out. Waiting for processes to exit. Feb 9 09:43:50.762879 systemd-logind[1131]: Removed session 5. Feb 9 09:43:50.833735 kubelet[1406]: E0209 09:43:50.833651 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:50.933998 kubelet[1406]: E0209 09:43:50.933955 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:51.034729 kubelet[1406]: E0209 09:43:51.034696 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:51.135571 kubelet[1406]: E0209 09:43:51.135463 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:51.236638 kubelet[1406]: E0209 09:43:51.236598 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:51.337306 kubelet[1406]: E0209 09:43:51.337261 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:51.438121 kubelet[1406]: E0209 09:43:51.438006 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:51.538666 kubelet[1406]: E0209 09:43:51.538611 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:51.616451 kubelet[1406]: E0209 09:43:51.616408 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:43:51.639708 kubelet[1406]: E0209 09:43:51.639669 1406 kubelet_node_status.go:458] "Error getting the current node from lister" 
err="node \"10.0.0.14\" not found" Feb 9 09:43:51.740502 kubelet[1406]: E0209 09:43:51.740391 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:51.841274 kubelet[1406]: E0209 09:43:51.841223 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:51.941987 kubelet[1406]: E0209 09:43:51.941808 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:52.042479 kubelet[1406]: E0209 09:43:52.042364 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:52.143125 kubelet[1406]: E0209 09:43:52.143081 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:52.243786 kubelet[1406]: E0209 09:43:52.243746 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:52.344394 kubelet[1406]: E0209 09:43:52.344290 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:52.444914 kubelet[1406]: E0209 09:43:52.444866 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:52.545509 kubelet[1406]: E0209 09:43:52.545459 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:52.617658 kubelet[1406]: E0209 09:43:52.617537 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:43:52.646021 kubelet[1406]: E0209 09:43:52.645985 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:52.746571 kubelet[1406]: E0209 09:43:52.746534 1406 
kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:52.847375 kubelet[1406]: E0209 09:43:52.847233 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:52.948012 kubelet[1406]: E0209 09:43:52.947900 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:53.048094 kubelet[1406]: E0209 09:43:53.048034 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:53.148822 kubelet[1406]: E0209 09:43:53.148772 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:53.249719 kubelet[1406]: E0209 09:43:53.249632 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:53.350471 kubelet[1406]: E0209 09:43:53.350436 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:53.451576 kubelet[1406]: E0209 09:43:53.451540 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:53.552424 kubelet[1406]: E0209 09:43:53.552323 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:53.618463 kubelet[1406]: E0209 09:43:53.618431 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:43:53.653008 kubelet[1406]: E0209 09:43:53.652974 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:53.754042 kubelet[1406]: E0209 09:43:53.754005 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" 
not found" Feb 9 09:43:53.855418 kubelet[1406]: E0209 09:43:53.855322 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:53.956270 kubelet[1406]: E0209 09:43:53.956233 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:54.057035 kubelet[1406]: E0209 09:43:54.056998 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:54.158015 kubelet[1406]: E0209 09:43:54.157920 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:54.258574 kubelet[1406]: E0209 09:43:54.258525 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:54.359208 kubelet[1406]: E0209 09:43:54.359170 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:54.459900 kubelet[1406]: E0209 09:43:54.459814 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:54.560548 kubelet[1406]: E0209 09:43:54.560490 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:54.619477 kubelet[1406]: E0209 09:43:54.619414 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:43:54.660694 kubelet[1406]: E0209 09:43:54.660649 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:54.761529 kubelet[1406]: E0209 09:43:54.761445 1406 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Feb 9 09:43:54.862799 kubelet[1406]: I0209 09:43:54.862777 1406 
kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 9 09:43:54.863322 env[1140]: time="2024-02-09T09:43:54.863231445Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 09:43:54.863637 kubelet[1406]: I0209 09:43:54.863427 1406 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 9 09:43:55.616333 kubelet[1406]: I0209 09:43:55.616293 1406 apiserver.go:52] "Watching apiserver" Feb 9 09:43:55.619287 kubelet[1406]: I0209 09:43:55.619265 1406 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:43:55.619348 kubelet[1406]: I0209 09:43:55.619341 1406 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:43:55.619519 kubelet[1406]: E0209 09:43:55.619503 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:43:55.628046 systemd[1]: Created slice kubepods-burstable-pod086156e2_659d_400f_9b24_ebbcd4ae9dd5.slice. Feb 9 09:43:55.638819 systemd[1]: Created slice kubepods-besteffort-pod888a3bdd_ff7d_4cd9_a5f0_a82f1ea83c77.slice. 
Feb 9 09:43:55.712804 kubelet[1406]: I0209 09:43:55.712770 1406 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 09:43:55.807651 kubelet[1406]: I0209 09:43:55.807592 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/888a3bdd-ff7d-4cd9-a5f0-a82f1ea83c77-xtables-lock\") pod \"kube-proxy-5b2n2\" (UID: \"888a3bdd-ff7d-4cd9-a5f0-a82f1ea83c77\") " pod="kube-system/kube-proxy-5b2n2" Feb 9 09:43:55.807651 kubelet[1406]: I0209 09:43:55.807635 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/888a3bdd-ff7d-4cd9-a5f0-a82f1ea83c77-lib-modules\") pod \"kube-proxy-5b2n2\" (UID: \"888a3bdd-ff7d-4cd9-a5f0-a82f1ea83c77\") " pod="kube-system/kube-proxy-5b2n2" Feb 9 09:43:55.807651 kubelet[1406]: I0209 09:43:55.807658 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-etc-cni-netd\") pod \"cilium-xlg6j\" (UID: \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\") " pod="kube-system/cilium-xlg6j" Feb 9 09:43:55.807859 kubelet[1406]: I0209 09:43:55.807677 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-lib-modules\") pod \"cilium-xlg6j\" (UID: \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\") " pod="kube-system/cilium-xlg6j" Feb 9 09:43:55.807859 kubelet[1406]: I0209 09:43:55.807698 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/086156e2-659d-400f-9b24-ebbcd4ae9dd5-clustermesh-secrets\") pod \"cilium-xlg6j\" (UID: \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\") " 
pod="kube-system/cilium-xlg6j" Feb 9 09:43:55.807859 kubelet[1406]: I0209 09:43:55.807718 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6skt\" (UniqueName: \"kubernetes.io/projected/086156e2-659d-400f-9b24-ebbcd4ae9dd5-kube-api-access-v6skt\") pod \"cilium-xlg6j\" (UID: \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\") " pod="kube-system/cilium-xlg6j" Feb 9 09:43:55.807859 kubelet[1406]: I0209 09:43:55.807736 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/888a3bdd-ff7d-4cd9-a5f0-a82f1ea83c77-kube-proxy\") pod \"kube-proxy-5b2n2\" (UID: \"888a3bdd-ff7d-4cd9-a5f0-a82f1ea83c77\") " pod="kube-system/kube-proxy-5b2n2" Feb 9 09:43:55.807859 kubelet[1406]: I0209 09:43:55.807755 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-hostproc\") pod \"cilium-xlg6j\" (UID: \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\") " pod="kube-system/cilium-xlg6j" Feb 9 09:43:55.807859 kubelet[1406]: I0209 09:43:55.807776 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-cilium-cgroup\") pod \"cilium-xlg6j\" (UID: \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\") " pod="kube-system/cilium-xlg6j" Feb 9 09:43:55.807995 kubelet[1406]: I0209 09:43:55.807794 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-cni-path\") pod \"cilium-xlg6j\" (UID: \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\") " pod="kube-system/cilium-xlg6j" Feb 9 09:43:55.807995 kubelet[1406]: I0209 09:43:55.807811 1406 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-xtables-lock\") pod \"cilium-xlg6j\" (UID: \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\") " pod="kube-system/cilium-xlg6j" Feb 9 09:43:55.807995 kubelet[1406]: I0209 09:43:55.807829 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/086156e2-659d-400f-9b24-ebbcd4ae9dd5-hubble-tls\") pod \"cilium-xlg6j\" (UID: \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\") " pod="kube-system/cilium-xlg6j" Feb 9 09:43:55.807995 kubelet[1406]: I0209 09:43:55.807848 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c5hm\" (UniqueName: \"kubernetes.io/projected/888a3bdd-ff7d-4cd9-a5f0-a82f1ea83c77-kube-api-access-5c5hm\") pod \"kube-proxy-5b2n2\" (UID: \"888a3bdd-ff7d-4cd9-a5f0-a82f1ea83c77\") " pod="kube-system/kube-proxy-5b2n2" Feb 9 09:43:55.807995 kubelet[1406]: I0209 09:43:55.807866 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-cilium-run\") pod \"cilium-xlg6j\" (UID: \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\") " pod="kube-system/cilium-xlg6j" Feb 9 09:43:55.807995 kubelet[1406]: I0209 09:43:55.807884 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-bpf-maps\") pod \"cilium-xlg6j\" (UID: \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\") " pod="kube-system/cilium-xlg6j" Feb 9 09:43:55.808112 kubelet[1406]: I0209 09:43:55.807903 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/086156e2-659d-400f-9b24-ebbcd4ae9dd5-cilium-config-path\") pod \"cilium-xlg6j\" (UID: \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\") " pod="kube-system/cilium-xlg6j" Feb 9 09:43:55.808112 kubelet[1406]: I0209 09:43:55.807924 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-host-proc-sys-net\") pod \"cilium-xlg6j\" (UID: \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\") " pod="kube-system/cilium-xlg6j" Feb 9 09:43:55.808112 kubelet[1406]: I0209 09:43:55.807943 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-host-proc-sys-kernel\") pod \"cilium-xlg6j\" (UID: \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\") " pod="kube-system/cilium-xlg6j" Feb 9 09:43:55.808112 kubelet[1406]: I0209 09:43:55.807951 1406 reconciler.go:41] "Reconciler: start to sync state" Feb 9 09:43:55.952124 kubelet[1406]: E0209 09:43:55.952040 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:55.953289 env[1140]: time="2024-02-09T09:43:55.953068005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5b2n2,Uid:888a3bdd-ff7d-4cd9-a5f0-a82f1ea83c77,Namespace:kube-system,Attempt:0,}" Feb 9 09:43:56.239115 kubelet[1406]: E0209 09:43:56.238906 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:56.239406 env[1140]: time="2024-02-09T09:43:56.239357485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xlg6j,Uid:086156e2-659d-400f-9b24-ebbcd4ae9dd5,Namespace:kube-system,Attempt:0,}" Feb 9 
09:43:56.499392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount575430438.mount: Deactivated successfully. Feb 9 09:43:56.506273 env[1140]: time="2024-02-09T09:43:56.506232685Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:56.507625 env[1140]: time="2024-02-09T09:43:56.507599165Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:56.509674 env[1140]: time="2024-02-09T09:43:56.509625365Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:56.511409 env[1140]: time="2024-02-09T09:43:56.511373645Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:56.513049 env[1140]: time="2024-02-09T09:43:56.513020165Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:56.514570 env[1140]: time="2024-02-09T09:43:56.514541245Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:56.516126 env[1140]: time="2024-02-09T09:43:56.516096725Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:56.518584 env[1140]: time="2024-02-09T09:43:56.518558885Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:56.549163 env[1140]: time="2024-02-09T09:43:56.549073405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:43:56.549163 env[1140]: time="2024-02-09T09:43:56.549116805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:43:56.549163 env[1140]: time="2024-02-09T09:43:56.549127165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:43:56.549461 env[1140]: time="2024-02-09T09:43:56.549397525Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c20b4c2e87241269206585f59d7f5cff6da60decd524bee5a2d10acf0b625ae5 pid=1505 runtime=io.containerd.runc.v2 Feb 9 09:43:56.550268 env[1140]: time="2024-02-09T09:43:56.550207405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:43:56.550357 env[1140]: time="2024-02-09T09:43:56.550284605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:43:56.550357 env[1140]: time="2024-02-09T09:43:56.550311685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:43:56.550515 env[1140]: time="2024-02-09T09:43:56.550479405Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/61aabf117e4560d74dea4ba061949279dd4f08e79893ab20d1f7fe36b742aae8 pid=1504 runtime=io.containerd.runc.v2 Feb 9 09:43:56.581746 systemd[1]: Started cri-containerd-61aabf117e4560d74dea4ba061949279dd4f08e79893ab20d1f7fe36b742aae8.scope. Feb 9 09:43:56.584276 systemd[1]: Started cri-containerd-c20b4c2e87241269206585f59d7f5cff6da60decd524bee5a2d10acf0b625ae5.scope. Feb 9 09:43:56.621447 kubelet[1406]: E0209 09:43:56.621393 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:43:56.621851 env[1140]: time="2024-02-09T09:43:56.621808485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xlg6j,Uid:086156e2-659d-400f-9b24-ebbcd4ae9dd5,Namespace:kube-system,Attempt:0,} returns sandbox id \"c20b4c2e87241269206585f59d7f5cff6da60decd524bee5a2d10acf0b625ae5\"" Feb 9 09:43:56.623057 kubelet[1406]: E0209 09:43:56.622826 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:56.623927 env[1140]: time="2024-02-09T09:43:56.623754965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5b2n2,Uid:888a3bdd-ff7d-4cd9-a5f0-a82f1ea83c77,Namespace:kube-system,Attempt:0,} returns sandbox id \"61aabf117e4560d74dea4ba061949279dd4f08e79893ab20d1f7fe36b742aae8\"" Feb 9 09:43:56.624169 kubelet[1406]: E0209 09:43:56.624139 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:56.624618 env[1140]: time="2024-02-09T09:43:56.624588685Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 09:43:57.605865 kubelet[1406]: E0209 09:43:57.605815 1406 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:43:57.622120 kubelet[1406]: E0209 09:43:57.622089 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:43:58.622661 kubelet[1406]: E0209 09:43:58.622602 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:43:59.623302 kubelet[1406]: E0209 09:43:59.623261 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:43:59.814963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1862501297.mount: Deactivated successfully. Feb 9 09:44:00.624181 kubelet[1406]: E0209 09:44:00.624143 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:01.624475 kubelet[1406]: E0209 09:44:01.624425 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:02.039079 env[1140]: time="2024-02-09T09:44:02.038963645Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:44:02.040251 env[1140]: time="2024-02-09T09:44:02.040225085Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:44:02.042400 env[1140]: time="2024-02-09T09:44:02.042360605Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:44:02.042898 env[1140]: time="2024-02-09T09:44:02.042863325Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 9 09:44:02.044736 env[1140]: time="2024-02-09T09:44:02.044702805Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 09:44:02.044935 env[1140]: time="2024-02-09T09:44:02.044808285Z" level=info msg="CreateContainer within sandbox \"c20b4c2e87241269206585f59d7f5cff6da60decd524bee5a2d10acf0b625ae5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 09:44:02.055983 env[1140]: time="2024-02-09T09:44:02.055929085Z" level=info msg="CreateContainer within sandbox \"c20b4c2e87241269206585f59d7f5cff6da60decd524bee5a2d10acf0b625ae5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1259c776f1c174aa305a94af0da7fdefc428a61e126634d90d541e11a56d4ea6\"" Feb 9 09:44:02.056688 env[1140]: time="2024-02-09T09:44:02.056641085Z" level=info msg="StartContainer for \"1259c776f1c174aa305a94af0da7fdefc428a61e126634d90d541e11a56d4ea6\"" Feb 9 09:44:02.073445 systemd[1]: Started cri-containerd-1259c776f1c174aa305a94af0da7fdefc428a61e126634d90d541e11a56d4ea6.scope. Feb 9 09:44:02.112639 env[1140]: time="2024-02-09T09:44:02.112588765Z" level=info msg="StartContainer for \"1259c776f1c174aa305a94af0da7fdefc428a61e126634d90d541e11a56d4ea6\" returns successfully" Feb 9 09:44:02.141735 systemd[1]: cri-containerd-1259c776f1c174aa305a94af0da7fdefc428a61e126634d90d541e11a56d4ea6.scope: Deactivated successfully. 
Feb 9 09:44:02.218404 env[1140]: time="2024-02-09T09:44:02.218352245Z" level=info msg="shim disconnected" id=1259c776f1c174aa305a94af0da7fdefc428a61e126634d90d541e11a56d4ea6 Feb 9 09:44:02.218404 env[1140]: time="2024-02-09T09:44:02.218398605Z" level=warning msg="cleaning up after shim disconnected" id=1259c776f1c174aa305a94af0da7fdefc428a61e126634d90d541e11a56d4ea6 namespace=k8s.io Feb 9 09:44:02.218404 env[1140]: time="2024-02-09T09:44:02.218407765Z" level=info msg="cleaning up dead shim" Feb 9 09:44:02.224819 env[1140]: time="2024-02-09T09:44:02.224782965Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:44:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1622 runtime=io.containerd.runc.v2\n" Feb 9 09:44:02.624698 kubelet[1406]: E0209 09:44:02.624663 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:02.792434 kubelet[1406]: E0209 09:44:02.792277 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:02.794028 env[1140]: time="2024-02-09T09:44:02.793987965Z" level=info msg="CreateContainer within sandbox \"c20b4c2e87241269206585f59d7f5cff6da60decd524bee5a2d10acf0b625ae5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 09:44:02.809203 env[1140]: time="2024-02-09T09:44:02.809133165Z" level=info msg="CreateContainer within sandbox \"c20b4c2e87241269206585f59d7f5cff6da60decd524bee5a2d10acf0b625ae5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"954df0b20cb20984070ca7067698245d301fcd0963811da02a1ef95a8aefcf55\"" Feb 9 09:44:02.809982 env[1140]: time="2024-02-09T09:44:02.809940605Z" level=info msg="StartContainer for \"954df0b20cb20984070ca7067698245d301fcd0963811da02a1ef95a8aefcf55\"" Feb 9 09:44:02.823830 systemd[1]: Started 
cri-containerd-954df0b20cb20984070ca7067698245d301fcd0963811da02a1ef95a8aefcf55.scope. Feb 9 09:44:02.870379 env[1140]: time="2024-02-09T09:44:02.870331645Z" level=info msg="StartContainer for \"954df0b20cb20984070ca7067698245d301fcd0963811da02a1ef95a8aefcf55\" returns successfully" Feb 9 09:44:02.877577 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 09:44:02.877780 systemd[1]: Stopped systemd-sysctl.service. Feb 9 09:44:02.878457 systemd[1]: Stopping systemd-sysctl.service... Feb 9 09:44:02.879873 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:44:02.880110 systemd[1]: cri-containerd-954df0b20cb20984070ca7067698245d301fcd0963811da02a1ef95a8aefcf55.scope: Deactivated successfully. Feb 9 09:44:02.886976 systemd[1]: Finished systemd-sysctl.service. Feb 9 09:44:02.912852 env[1140]: time="2024-02-09T09:44:02.912802605Z" level=info msg="shim disconnected" id=954df0b20cb20984070ca7067698245d301fcd0963811da02a1ef95a8aefcf55 Feb 9 09:44:02.912852 env[1140]: time="2024-02-09T09:44:02.912852605Z" level=warning msg="cleaning up after shim disconnected" id=954df0b20cb20984070ca7067698245d301fcd0963811da02a1ef95a8aefcf55 namespace=k8s.io Feb 9 09:44:02.913052 env[1140]: time="2024-02-09T09:44:02.912861965Z" level=info msg="cleaning up dead shim" Feb 9 09:44:02.919606 env[1140]: time="2024-02-09T09:44:02.919556125Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:44:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1685 runtime=io.containerd.runc.v2\n" Feb 9 09:44:03.052377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1259c776f1c174aa305a94af0da7fdefc428a61e126634d90d541e11a56d4ea6-rootfs.mount: Deactivated successfully. Feb 9 09:44:03.099146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1056559160.mount: Deactivated successfully. 
Feb 9 09:44:03.441698 env[1140]: time="2024-02-09T09:44:03.441648492Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:44:03.442859 env[1140]: time="2024-02-09T09:44:03.442822495Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:44:03.444226 env[1140]: time="2024-02-09T09:44:03.444191979Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:44:03.445467 env[1140]: time="2024-02-09T09:44:03.445440782Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:44:03.445920 env[1140]: time="2024-02-09T09:44:03.445898104Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 9 09:44:03.447672 env[1140]: time="2024-02-09T09:44:03.447640268Z" level=info msg="CreateContainer within sandbox \"61aabf117e4560d74dea4ba061949279dd4f08e79893ab20d1f7fe36b742aae8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 09:44:03.458858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1190881194.mount: Deactivated successfully. 
Feb 9 09:44:03.463207 env[1140]: time="2024-02-09T09:44:03.463142392Z" level=info msg="CreateContainer within sandbox \"61aabf117e4560d74dea4ba061949279dd4f08e79893ab20d1f7fe36b742aae8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2b239f8ea522301126353e40128388a649d6e32936c6b2888b6e219d46297b83\"" Feb 9 09:44:03.463884 env[1140]: time="2024-02-09T09:44:03.463804274Z" level=info msg="StartContainer for \"2b239f8ea522301126353e40128388a649d6e32936c6b2888b6e219d46297b83\"" Feb 9 09:44:03.478110 systemd[1]: Started cri-containerd-2b239f8ea522301126353e40128388a649d6e32936c6b2888b6e219d46297b83.scope. Feb 9 09:44:03.517050 env[1140]: time="2024-02-09T09:44:03.517004345Z" level=info msg="StartContainer for \"2b239f8ea522301126353e40128388a649d6e32936c6b2888b6e219d46297b83\" returns successfully" Feb 9 09:44:03.625741 kubelet[1406]: E0209 09:44:03.625691 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:03.794966 kubelet[1406]: E0209 09:44:03.794847 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:03.796488 kubelet[1406]: E0209 09:44:03.796348 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:03.798293 env[1140]: time="2024-02-09T09:44:03.798252661Z" level=info msg="CreateContainer within sandbox \"c20b4c2e87241269206585f59d7f5cff6da60decd524bee5a2d10acf0b625ae5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 09:44:03.802916 kubelet[1406]: I0209 09:44:03.802885 1406 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-5b2n2" podStartSLOduration=-9.223372023051931e+09 pod.CreationTimestamp="2024-02-09 09:43:50 +0000 UTC" 
firstStartedPulling="2024-02-09 09:43:56.624642765 +0000 UTC m=+19.882219561" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:44:03.802432153 +0000 UTC m=+27.060008989" watchObservedRunningTime="2024-02-09 09:44:03.802844114 +0000 UTC m=+27.060420950" Feb 9 09:44:03.810227 env[1140]: time="2024-02-09T09:44:03.810175415Z" level=info msg="CreateContainer within sandbox \"c20b4c2e87241269206585f59d7f5cff6da60decd524bee5a2d10acf0b625ae5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"64eee29ceb23a66122f663f0ed455cce0d95abddc6b18774c20addc67301c7ce\"" Feb 9 09:44:03.810771 env[1140]: time="2024-02-09T09:44:03.810745096Z" level=info msg="StartContainer for \"64eee29ceb23a66122f663f0ed455cce0d95abddc6b18774c20addc67301c7ce\"" Feb 9 09:44:03.826447 systemd[1]: Started cri-containerd-64eee29ceb23a66122f663f0ed455cce0d95abddc6b18774c20addc67301c7ce.scope. Feb 9 09:44:03.863624 env[1140]: time="2024-02-09T09:44:03.863573166Z" level=info msg="StartContainer for \"64eee29ceb23a66122f663f0ed455cce0d95abddc6b18774c20addc67301c7ce\" returns successfully" Feb 9 09:44:03.872702 systemd[1]: cri-containerd-64eee29ceb23a66122f663f0ed455cce0d95abddc6b18774c20addc67301c7ce.scope: Deactivated successfully. 
Feb 9 09:44:03.961035 env[1140]: time="2024-02-09T09:44:03.960987282Z" level=info msg="shim disconnected" id=64eee29ceb23a66122f663f0ed455cce0d95abddc6b18774c20addc67301c7ce Feb 9 09:44:03.961035 env[1140]: time="2024-02-09T09:44:03.961035682Z" level=warning msg="cleaning up after shim disconnected" id=64eee29ceb23a66122f663f0ed455cce0d95abddc6b18774c20addc67301c7ce namespace=k8s.io Feb 9 09:44:03.961277 env[1140]: time="2024-02-09T09:44:03.961048722Z" level=info msg="cleaning up dead shim" Feb 9 09:44:03.967648 env[1140]: time="2024-02-09T09:44:03.967615421Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:44:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1890 runtime=io.containerd.runc.v2\n" Feb 9 09:44:04.625851 kubelet[1406]: E0209 09:44:04.625813 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:04.799341 kubelet[1406]: E0209 09:44:04.799306 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:04.799519 kubelet[1406]: E0209 09:44:04.799306 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:04.801180 env[1140]: time="2024-02-09T09:44:04.801141759Z" level=info msg="CreateContainer within sandbox \"c20b4c2e87241269206585f59d7f5cff6da60decd524bee5a2d10acf0b625ae5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 09:44:04.812272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3229912103.mount: Deactivated successfully. 
Feb 9 09:44:04.817546 env[1140]: time="2024-02-09T09:44:04.817498163Z" level=info msg="CreateContainer within sandbox \"c20b4c2e87241269206585f59d7f5cff6da60decd524bee5a2d10acf0b625ae5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b2d3f9594eeb4685ab701341cc131f95ae69e7ccc26c7c01be9bb31ed4a2bbc8\"" Feb 9 09:44:04.817982 env[1140]: time="2024-02-09T09:44:04.817916164Z" level=info msg="StartContainer for \"b2d3f9594eeb4685ab701341cc131f95ae69e7ccc26c7c01be9bb31ed4a2bbc8\"" Feb 9 09:44:04.832848 systemd[1]: Started cri-containerd-b2d3f9594eeb4685ab701341cc131f95ae69e7ccc26c7c01be9bb31ed4a2bbc8.scope. Feb 9 09:44:04.862390 systemd[1]: cri-containerd-b2d3f9594eeb4685ab701341cc131f95ae69e7ccc26c7c01be9bb31ed4a2bbc8.scope: Deactivated successfully. Feb 9 09:44:04.864352 env[1140]: time="2024-02-09T09:44:04.864254167Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod086156e2_659d_400f_9b24_ebbcd4ae9dd5.slice/cri-containerd-b2d3f9594eeb4685ab701341cc131f95ae69e7ccc26c7c01be9bb31ed4a2bbc8.scope/memory.events\": no such file or directory" Feb 9 09:44:04.865244 env[1140]: time="2024-02-09T09:44:04.865207609Z" level=info msg="StartContainer for \"b2d3f9594eeb4685ab701341cc131f95ae69e7ccc26c7c01be9bb31ed4a2bbc8\" returns successfully" Feb 9 09:44:04.885324 env[1140]: time="2024-02-09T09:44:04.885222942Z" level=info msg="shim disconnected" id=b2d3f9594eeb4685ab701341cc131f95ae69e7ccc26c7c01be9bb31ed4a2bbc8 Feb 9 09:44:04.885518 env[1140]: time="2024-02-09T09:44:04.885499143Z" level=warning msg="cleaning up after shim disconnected" id=b2d3f9594eeb4685ab701341cc131f95ae69e7ccc26c7c01be9bb31ed4a2bbc8 namespace=k8s.io Feb 9 09:44:04.885598 env[1140]: time="2024-02-09T09:44:04.885583823Z" level=info msg="cleaning up dead shim" Feb 9 09:44:04.893875 env[1140]: time="2024-02-09T09:44:04.893835845Z" level=warning 
msg="cleanup warnings time=\"2024-02-09T09:44:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1944 runtime=io.containerd.runc.v2\n" Feb 9 09:44:05.626928 kubelet[1406]: E0209 09:44:05.626868 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:05.802662 kubelet[1406]: E0209 09:44:05.802636 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:05.804669 env[1140]: time="2024-02-09T09:44:05.804619570Z" level=info msg="CreateContainer within sandbox \"c20b4c2e87241269206585f59d7f5cff6da60decd524bee5a2d10acf0b625ae5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 09:44:05.822325 env[1140]: time="2024-02-09T09:44:05.822266414Z" level=info msg="CreateContainer within sandbox \"c20b4c2e87241269206585f59d7f5cff6da60decd524bee5a2d10acf0b625ae5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bcb16267d7a630ff8860889bba06834111fce4b82e9b56f54ed1bae4492b4be9\"" Feb 9 09:44:05.822773 env[1140]: time="2024-02-09T09:44:05.822738655Z" level=info msg="StartContainer for \"bcb16267d7a630ff8860889bba06834111fce4b82e9b56f54ed1bae4492b4be9\"" Feb 9 09:44:05.841513 systemd[1]: Started cri-containerd-bcb16267d7a630ff8860889bba06834111fce4b82e9b56f54ed1bae4492b4be9.scope. Feb 9 09:44:05.884893 env[1140]: time="2024-02-09T09:44:05.884773409Z" level=info msg="StartContainer for \"bcb16267d7a630ff8860889bba06834111fce4b82e9b56f54ed1bae4492b4be9\" returns successfully" Feb 9 09:44:06.046145 kubelet[1406]: I0209 09:44:06.046100 1406 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 09:44:06.156217 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Feb 9 09:44:06.444215 kernel: Initializing XFRM netlink socket Feb 9 09:44:06.447214 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 9 09:44:06.627319 kubelet[1406]: E0209 09:44:06.627251 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:06.806919 kubelet[1406]: E0209 09:44:06.806829 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:07.475686 kubelet[1406]: I0209 09:44:07.475640 1406 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-xlg6j" podStartSLOduration=-9.223372019379175e+09 pod.CreationTimestamp="2024-02-09 09:43:50 +0000 UTC" firstStartedPulling="2024-02-09 09:43:56.623612485 +0000 UTC m=+19.881189321" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:44:06.843379744 +0000 UTC m=+30.100956580" watchObservedRunningTime="2024-02-09 09:44:07.475600632 +0000 UTC m=+30.733177428" Feb 9 09:44:07.475870 kubelet[1406]: I0209 09:44:07.475825 1406 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:44:07.484310 systemd[1]: Created slice kubepods-besteffort-pod1812afa1_f6c0_4a11_a8f5_5ebd9a3e8461.slice. 
Feb 9 09:44:07.628014 kubelet[1406]: E0209 09:44:07.627968 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:07.672782 kubelet[1406]: I0209 09:44:07.672736 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sglcl\" (UniqueName: \"kubernetes.io/projected/1812afa1-f6c0-4a11-a8f5-5ebd9a3e8461-kube-api-access-sglcl\") pod \"nginx-deployment-8ffc5cf85-bfrwp\" (UID: \"1812afa1-f6c0-4a11-a8f5-5ebd9a3e8461\") " pod="default/nginx-deployment-8ffc5cf85-bfrwp" Feb 9 09:44:07.787510 env[1140]: time="2024-02-09T09:44:07.787410834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-bfrwp,Uid:1812afa1-f6c0-4a11-a8f5-5ebd9a3e8461,Namespace:default,Attempt:0,}" Feb 9 09:44:07.808110 kubelet[1406]: E0209 09:44:07.808085 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:08.068960 systemd-networkd[1055]: cilium_host: Link UP Feb 9 09:44:08.069061 systemd-networkd[1055]: cilium_net: Link UP Feb 9 09:44:08.069955 systemd-networkd[1055]: cilium_net: Gained carrier Feb 9 09:44:08.070209 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 09:44:08.070247 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 09:44:08.070639 systemd-networkd[1055]: cilium_host: Gained carrier Feb 9 09:44:08.100258 systemd-networkd[1055]: cilium_host: Gained IPv6LL Feb 9 09:44:08.144798 systemd-networkd[1055]: cilium_vxlan: Link UP Feb 9 09:44:08.144804 systemd-networkd[1055]: cilium_vxlan: Gained carrier Feb 9 09:44:08.332378 systemd-networkd[1055]: cilium_net: Gained IPv6LL Feb 9 09:44:08.438220 kernel: NET: Registered PF_ALG protocol family Feb 9 09:44:08.628687 kubelet[1406]: E0209 09:44:08.628585 1406 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:08.809446 kubelet[1406]: E0209 09:44:08.809419 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:09.050214 systemd-networkd[1055]: lxc_health: Link UP Feb 9 09:44:09.067337 systemd-networkd[1055]: lxc_health: Gained carrier Feb 9 09:44:09.068218 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 09:44:09.211390 systemd-networkd[1055]: cilium_vxlan: Gained IPv6LL Feb 9 09:44:09.347703 systemd-networkd[1055]: lxcf3a6bc34dc0d: Link UP Feb 9 09:44:09.360229 kernel: eth0: renamed from tmpdbc8e Feb 9 09:44:09.369743 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:44:09.369849 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf3a6bc34dc0d: link becomes ready Feb 9 09:44:09.369956 systemd-networkd[1055]: lxcf3a6bc34dc0d: Gained carrier Feb 9 09:44:09.629433 kubelet[1406]: E0209 09:44:09.629323 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:10.241396 kubelet[1406]: E0209 09:44:10.241353 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:10.555394 systemd-networkd[1055]: lxcf3a6bc34dc0d: Gained IPv6LL Feb 9 09:44:10.630397 kubelet[1406]: E0209 09:44:10.630343 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:10.811410 systemd-networkd[1055]: lxc_health: Gained IPv6LL Feb 9 09:44:11.630902 kubelet[1406]: E0209 09:44:11.630841 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:12.631236 kubelet[1406]: E0209 09:44:12.631179 
1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:12.824240 env[1140]: time="2024-02-09T09:44:12.824153511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:44:12.824240 env[1140]: time="2024-02-09T09:44:12.824208991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:44:12.824601 env[1140]: time="2024-02-09T09:44:12.824219351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:44:12.824865 env[1140]: time="2024-02-09T09:44:12.824753432Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dbc8e6e3d8308f777058f7e617fae0c57ce6cc7f8e40794e75a6e65a72783847 pid=2481 runtime=io.containerd.runc.v2 Feb 9 09:44:12.838547 systemd[1]: Started cri-containerd-dbc8e6e3d8308f777058f7e617fae0c57ce6cc7f8e40794e75a6e65a72783847.scope. 
Feb 9 09:44:12.882358 systemd-resolved[1087]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 09:44:12.900634 env[1140]: time="2024-02-09T09:44:12.900593672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-bfrwp,Uid:1812afa1-f6c0-4a11-a8f5-5ebd9a3e8461,Namespace:default,Attempt:0,} returns sandbox id \"dbc8e6e3d8308f777058f7e617fae0c57ce6cc7f8e40794e75a6e65a72783847\"" Feb 9 09:44:12.902225 env[1140]: time="2024-02-09T09:44:12.902176794Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 09:44:13.631924 kubelet[1406]: E0209 09:44:13.631883 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:14.632551 kubelet[1406]: E0209 09:44:14.632505 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:15.424814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2859186946.mount: Deactivated successfully. Feb 9 09:44:15.439293 update_engine[1133]: I0209 09:44:15.439232 1133 update_attempter.cc:509] Updating boot flags... 
Feb 9 09:44:15.633500 kubelet[1406]: E0209 09:44:15.633430 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:16.153941 env[1140]: time="2024-02-09T09:44:16.153865361Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:44:16.156022 env[1140]: time="2024-02-09T09:44:16.155989284Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:44:16.157268 env[1140]: time="2024-02-09T09:44:16.157238005Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:44:16.158984 env[1140]: time="2024-02-09T09:44:16.158948047Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:44:16.160486 env[1140]: time="2024-02-09T09:44:16.160445569Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\"" Feb 9 09:44:16.161863 env[1140]: time="2024-02-09T09:44:16.161812611Z" level=info msg="CreateContainer within sandbox \"dbc8e6e3d8308f777058f7e617fae0c57ce6cc7f8e40794e75a6e65a72783847\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 9 09:44:16.170931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount727296050.mount: Deactivated successfully. 
Feb 9 09:44:16.174716 env[1140]: time="2024-02-09T09:44:16.174668827Z" level=info msg="CreateContainer within sandbox \"dbc8e6e3d8308f777058f7e617fae0c57ce6cc7f8e40794e75a6e65a72783847\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"276731a8d7169a2cd4c31d68e81e6ccf1e1e175666549dc0ae25e8dff356cc7a\"" Feb 9 09:44:16.175059 env[1140]: time="2024-02-09T09:44:16.175030067Z" level=info msg="StartContainer for \"276731a8d7169a2cd4c31d68e81e6ccf1e1e175666549dc0ae25e8dff356cc7a\"" Feb 9 09:44:16.190793 systemd[1]: Started cri-containerd-276731a8d7169a2cd4c31d68e81e6ccf1e1e175666549dc0ae25e8dff356cc7a.scope. Feb 9 09:44:16.227755 env[1140]: time="2024-02-09T09:44:16.227672732Z" level=info msg="StartContainer for \"276731a8d7169a2cd4c31d68e81e6ccf1e1e175666549dc0ae25e8dff356cc7a\" returns successfully" Feb 9 09:44:16.634278 kubelet[1406]: E0209 09:44:16.634234 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:16.831087 kubelet[1406]: I0209 09:44:16.831044 1406 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-bfrwp" podStartSLOduration=-9.223372027023764e+09 pod.CreationTimestamp="2024-02-09 09:44:07 +0000 UTC" firstStartedPulling="2024-02-09 09:44:12.901770994 +0000 UTC m=+36.159347830" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:44:16.830421629 +0000 UTC m=+40.087998465" watchObservedRunningTime="2024-02-09 09:44:16.83101207 +0000 UTC m=+40.088588906" Feb 9 09:44:17.605843 kubelet[1406]: E0209 09:44:17.605799 1406 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:17.635171 kubelet[1406]: E0209 09:44:17.635131 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:18.316803 kubelet[1406]: I0209 09:44:18.316765 1406 
prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 9 09:44:18.317538 kubelet[1406]: E0209 09:44:18.317505 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:18.636125 kubelet[1406]: E0209 09:44:18.636014 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:18.826802 kubelet[1406]: E0209 09:44:18.826773 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:19.092337 kubelet[1406]: I0209 09:44:19.092292 1406 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:44:19.097089 systemd[1]: Created slice kubepods-besteffort-pod74c9b595_a3bc_4e3f_825c_1f00ae229edf.slice. Feb 9 09:44:19.145602 kubelet[1406]: I0209 09:44:19.145557 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqqg5\" (UniqueName: \"kubernetes.io/projected/74c9b595-a3bc-4e3f-825c-1f00ae229edf-kube-api-access-kqqg5\") pod \"nfs-server-provisioner-0\" (UID: \"74c9b595-a3bc-4e3f-825c-1f00ae229edf\") " pod="default/nfs-server-provisioner-0" Feb 9 09:44:19.145602 kubelet[1406]: I0209 09:44:19.145602 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/74c9b595-a3bc-4e3f-825c-1f00ae229edf-data\") pod \"nfs-server-provisioner-0\" (UID: \"74c9b595-a3bc-4e3f-825c-1f00ae229edf\") " pod="default/nfs-server-provisioner-0" Feb 9 09:44:19.399842 env[1140]: time="2024-02-09T09:44:19.399738265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:74c9b595-a3bc-4e3f-825c-1f00ae229edf,Namespace:default,Attempt:0,}" Feb 9 09:44:19.433223 
systemd-networkd[1055]: lxcbf60d4ace02e: Link UP Feb 9 09:44:19.457224 kernel: eth0: renamed from tmp7753d Feb 9 09:44:19.464270 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:44:19.464339 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcbf60d4ace02e: link becomes ready Feb 9 09:44:19.464364 systemd-networkd[1055]: lxcbf60d4ace02e: Gained carrier Feb 9 09:44:19.636608 kubelet[1406]: E0209 09:44:19.636555 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:19.669001 env[1140]: time="2024-02-09T09:44:19.668570816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:44:19.669001 env[1140]: time="2024-02-09T09:44:19.668620176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:44:19.669352 env[1140]: time="2024-02-09T09:44:19.669294736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:44:19.669705 env[1140]: time="2024-02-09T09:44:19.669655417Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7753da57ff200829f991aac83b405587cfad9c3b22e60662801266f2f1870342 pid=2671 runtime=io.containerd.runc.v2 Feb 9 09:44:19.683561 systemd[1]: Started cri-containerd-7753da57ff200829f991aac83b405587cfad9c3b22e60662801266f2f1870342.scope. 
Feb 9 09:44:19.707854 systemd-resolved[1087]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 09:44:19.723518 env[1140]: time="2024-02-09T09:44:19.723468791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:74c9b595-a3bc-4e3f-825c-1f00ae229edf,Namespace:default,Attempt:0,} returns sandbox id \"7753da57ff200829f991aac83b405587cfad9c3b22e60662801266f2f1870342\"" Feb 9 09:44:19.725060 env[1140]: time="2024-02-09T09:44:19.725036152Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 9 09:44:20.637486 kubelet[1406]: E0209 09:44:20.637434 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:20.987378 systemd-networkd[1055]: lxcbf60d4ace02e: Gained IPv6LL Feb 9 09:44:21.638372 kubelet[1406]: E0209 09:44:21.638338 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:22.638695 kubelet[1406]: E0209 09:44:22.638645 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:22.760849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount673900534.mount: Deactivated successfully. 
Feb 9 09:44:23.639257 kubelet[1406]: E0209 09:44:23.639178 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:24.472896 env[1140]: time="2024-02-09T09:44:24.472842696Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:44:24.475370 env[1140]: time="2024-02-09T09:44:24.475334618Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:44:24.476960 env[1140]: time="2024-02-09T09:44:24.476933139Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:44:24.479174 env[1140]: time="2024-02-09T09:44:24.479140701Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:44:24.480285 env[1140]: time="2024-02-09T09:44:24.480238062Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Feb 9 09:44:24.482168 env[1140]: time="2024-02-09T09:44:24.482124663Z" level=info msg="CreateContainer within sandbox \"7753da57ff200829f991aac83b405587cfad9c3b22e60662801266f2f1870342\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 9 09:44:24.490774 env[1140]: time="2024-02-09T09:44:24.490740589Z" level=info msg="CreateContainer within sandbox \"7753da57ff200829f991aac83b405587cfad9c3b22e60662801266f2f1870342\" for 
&ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"a50e1859c4f1fbb566a178f7025c43685caa916402266073f5367aa7a0ab0734\"" Feb 9 09:44:24.491283 env[1140]: time="2024-02-09T09:44:24.491257390Z" level=info msg="StartContainer for \"a50e1859c4f1fbb566a178f7025c43685caa916402266073f5367aa7a0ab0734\"" Feb 9 09:44:24.508195 systemd[1]: Started cri-containerd-a50e1859c4f1fbb566a178f7025c43685caa916402266073f5367aa7a0ab0734.scope. Feb 9 09:44:24.545131 env[1140]: time="2024-02-09T09:44:24.542097267Z" level=info msg="StartContainer for \"a50e1859c4f1fbb566a178f7025c43685caa916402266073f5367aa7a0ab0734\" returns successfully" Feb 9 09:44:24.640274 kubelet[1406]: E0209 09:44:24.640230 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:24.846682 kubelet[1406]: I0209 09:44:24.846571 1406 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372031008236e+09 pod.CreationTimestamp="2024-02-09 09:44:19 +0000 UTC" firstStartedPulling="2024-02-09 09:44:19.724737152 +0000 UTC m=+42.982313948" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:44:24.845312768 +0000 UTC m=+48.102889564" watchObservedRunningTime="2024-02-09 09:44:24.846540489 +0000 UTC m=+48.104117325" Feb 9 09:44:25.488368 systemd[1]: run-containerd-runc-k8s.io-a50e1859c4f1fbb566a178f7025c43685caa916402266073f5367aa7a0ab0734-runc.LYLWSU.mount: Deactivated successfully. 
Feb 9 09:44:25.640856 kubelet[1406]: E0209 09:44:25.640811 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:26.641397 kubelet[1406]: E0209 09:44:26.641351 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:27.641680 kubelet[1406]: E0209 09:44:27.641634 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:28.642166 kubelet[1406]: E0209 09:44:28.642117 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:29.642877 kubelet[1406]: E0209 09:44:29.642822 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:30.643467 kubelet[1406]: E0209 09:44:30.643407 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:31.644366 kubelet[1406]: E0209 09:44:31.644327 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:32.645397 kubelet[1406]: E0209 09:44:32.645356 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:33.646355 kubelet[1406]: E0209 09:44:33.646305 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:34.450338 kubelet[1406]: I0209 09:44:34.450302 1406 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:44:34.454879 systemd[1]: Created slice kubepods-besteffort-podcc6c83b3_d963_49c1_baec_b3e9d28061fd.slice. 
Feb 9 09:44:34.620117 kubelet[1406]: I0209 09:44:34.620083 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v5pt\" (UniqueName: \"kubernetes.io/projected/cc6c83b3-d963-49c1-baec-b3e9d28061fd-kube-api-access-2v5pt\") pod \"test-pod-1\" (UID: \"cc6c83b3-d963-49c1-baec-b3e9d28061fd\") " pod="default/test-pod-1" Feb 9 09:44:34.620369 kubelet[1406]: I0209 09:44:34.620355 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3b333f98-a373-4b7b-ab40-173b6cd063dc\" (UniqueName: \"kubernetes.io/nfs/cc6c83b3-d963-49c1-baec-b3e9d28061fd-pvc-3b333f98-a373-4b7b-ab40-173b6cd063dc\") pod \"test-pod-1\" (UID: \"cc6c83b3-d963-49c1-baec-b3e9d28061fd\") " pod="default/test-pod-1" Feb 9 09:44:34.646990 kubelet[1406]: E0209 09:44:34.646956 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:34.742308 kernel: FS-Cache: Loaded Feb 9 09:44:34.765405 kernel: RPC: Registered named UNIX socket transport module. Feb 9 09:44:34.765516 kernel: RPC: Registered udp transport module. Feb 9 09:44:34.765547 kernel: RPC: Registered tcp transport module. Feb 9 09:44:34.766294 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Feb 9 09:44:34.794210 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 9 09:44:34.923214 kernel: NFS: Registering the id_resolver key type Feb 9 09:44:34.923380 kernel: Key type id_resolver registered Feb 9 09:44:34.923404 kernel: Key type id_legacy registered Feb 9 09:44:34.943869 nfsidmap[2817]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 9 09:44:34.946676 nfsidmap[2820]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 9 09:44:35.058768 env[1140]: time="2024-02-09T09:44:35.058663473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:cc6c83b3-d963-49c1-baec-b3e9d28061fd,Namespace:default,Attempt:0,}" Feb 9 09:44:35.086300 systemd-networkd[1055]: lxc24f7ded34bed: Link UP Feb 9 09:44:35.094210 kernel: eth0: renamed from tmpeb64f Feb 9 09:44:35.100663 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:44:35.100750 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc24f7ded34bed: link becomes ready Feb 9 09:44:35.100889 systemd-networkd[1055]: lxc24f7ded34bed: Gained carrier Feb 9 09:44:35.287652 env[1140]: time="2024-02-09T09:44:35.287582995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:44:35.287652 env[1140]: time="2024-02-09T09:44:35.287621795Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:44:35.287652 env[1140]: time="2024-02-09T09:44:35.287632195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:44:35.287884 env[1140]: time="2024-02-09T09:44:35.287778795Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eb64f217689849214671790980bbdab04ce279cae4fb3316851788ece08cb506 pid=2855 runtime=io.containerd.runc.v2 Feb 9 09:44:35.297618 systemd[1]: Started cri-containerd-eb64f217689849214671790980bbdab04ce279cae4fb3316851788ece08cb506.scope. Feb 9 09:44:35.320865 systemd-resolved[1087]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 09:44:35.339273 env[1140]: time="2024-02-09T09:44:35.339233214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:cc6c83b3-d963-49c1-baec-b3e9d28061fd,Namespace:default,Attempt:0,} returns sandbox id \"eb64f217689849214671790980bbdab04ce279cae4fb3316851788ece08cb506\"" Feb 9 09:44:35.340525 env[1140]: time="2024-02-09T09:44:35.340495574Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 09:44:35.648120 kubelet[1406]: E0209 09:44:35.647999 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:35.673447 env[1140]: time="2024-02-09T09:44:35.673401854Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:44:35.675168 env[1140]: time="2024-02-09T09:44:35.675133334Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:44:35.676700 env[1140]: time="2024-02-09T09:44:35.676672535Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:44:35.678821 
env[1140]: time="2024-02-09T09:44:35.678792056Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:44:35.679471 env[1140]: time="2024-02-09T09:44:35.679443176Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\"" Feb 9 09:44:35.681178 env[1140]: time="2024-02-09T09:44:35.681148096Z" level=info msg="CreateContainer within sandbox \"eb64f217689849214671790980bbdab04ce279cae4fb3316851788ece08cb506\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 9 09:44:35.692728 env[1140]: time="2024-02-09T09:44:35.692690301Z" level=info msg="CreateContainer within sandbox \"eb64f217689849214671790980bbdab04ce279cae4fb3316851788ece08cb506\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"beb236cfad7c11a0445bc55ae1f5ce2fe84ba0ad9d890b1442a93dcbb79945e7\"" Feb 9 09:44:35.693254 env[1140]: time="2024-02-09T09:44:35.693202741Z" level=info msg="StartContainer for \"beb236cfad7c11a0445bc55ae1f5ce2fe84ba0ad9d890b1442a93dcbb79945e7\"" Feb 9 09:44:35.706839 systemd[1]: Started cri-containerd-beb236cfad7c11a0445bc55ae1f5ce2fe84ba0ad9d890b1442a93dcbb79945e7.scope. 
Feb 9 09:44:35.739625 env[1140]: time="2024-02-09T09:44:35.739581277Z" level=info msg="StartContainer for \"beb236cfad7c11a0445bc55ae1f5ce2fe84ba0ad9d890b1442a93dcbb79945e7\" returns successfully" Feb 9 09:44:35.862953 kubelet[1406]: I0209 09:44:35.862850 1406 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.22337201999196e+09 pod.CreationTimestamp="2024-02-09 09:44:19 +0000 UTC" firstStartedPulling="2024-02-09 09:44:35.340164694 +0000 UTC m=+58.597741490" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:44:35.862041481 +0000 UTC m=+59.119618317" watchObservedRunningTime="2024-02-09 09:44:35.862815682 +0000 UTC m=+59.120392518" Feb 9 09:44:36.649110 kubelet[1406]: E0209 09:44:36.649054 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:36.987371 systemd-networkd[1055]: lxc24f7ded34bed: Gained IPv6LL Feb 9 09:44:37.605536 kubelet[1406]: E0209 09:44:37.605504 1406 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:37.649897 kubelet[1406]: E0209 09:44:37.649863 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:38.651954 kubelet[1406]: E0209 09:44:38.650491 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:39.651331 kubelet[1406]: E0209 09:44:39.651294 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:40.652110 kubelet[1406]: E0209 09:44:40.652038 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:41.653138 kubelet[1406]: E0209 09:44:41.653029 1406 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:42.215202 env[1140]: time="2024-02-09T09:44:42.215133752Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 09:44:42.220898 env[1140]: time="2024-02-09T09:44:42.220863393Z" level=info msg="StopContainer for \"bcb16267d7a630ff8860889bba06834111fce4b82e9b56f54ed1bae4492b4be9\" with timeout 1 (s)" Feb 9 09:44:42.221357 env[1140]: time="2024-02-09T09:44:42.221329993Z" level=info msg="Stop container \"bcb16267d7a630ff8860889bba06834111fce4b82e9b56f54ed1bae4492b4be9\" with signal terminated" Feb 9 09:44:42.229034 systemd-networkd[1055]: lxc_health: Link DOWN Feb 9 09:44:42.229048 systemd-networkd[1055]: lxc_health: Lost carrier Feb 9 09:44:42.267281 systemd[1]: cri-containerd-bcb16267d7a630ff8860889bba06834111fce4b82e9b56f54ed1bae4492b4be9.scope: Deactivated successfully. Feb 9 09:44:42.267620 systemd[1]: cri-containerd-bcb16267d7a630ff8860889bba06834111fce4b82e9b56f54ed1bae4492b4be9.scope: Consumed 6.536s CPU time. Feb 9 09:44:42.284035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bcb16267d7a630ff8860889bba06834111fce4b82e9b56f54ed1bae4492b4be9-rootfs.mount: Deactivated successfully. 
Feb 9 09:44:42.291139 env[1140]: time="2024-02-09T09:44:42.291067409Z" level=info msg="shim disconnected" id=bcb16267d7a630ff8860889bba06834111fce4b82e9b56f54ed1bae4492b4be9 Feb 9 09:44:42.291139 env[1140]: time="2024-02-09T09:44:42.291114129Z" level=warning msg="cleaning up after shim disconnected" id=bcb16267d7a630ff8860889bba06834111fce4b82e9b56f54ed1bae4492b4be9 namespace=k8s.io Feb 9 09:44:42.291139 env[1140]: time="2024-02-09T09:44:42.291125529Z" level=info msg="cleaning up dead shim" Feb 9 09:44:42.298241 env[1140]: time="2024-02-09T09:44:42.298204211Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:44:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2988 runtime=io.containerd.runc.v2\n" Feb 9 09:44:42.300733 env[1140]: time="2024-02-09T09:44:42.300685691Z" level=info msg="StopContainer for \"bcb16267d7a630ff8860889bba06834111fce4b82e9b56f54ed1bae4492b4be9\" returns successfully" Feb 9 09:44:42.301286 env[1140]: time="2024-02-09T09:44:42.301257491Z" level=info msg="StopPodSandbox for \"c20b4c2e87241269206585f59d7f5cff6da60decd524bee5a2d10acf0b625ae5\"" Feb 9 09:44:42.301453 env[1140]: time="2024-02-09T09:44:42.301427131Z" level=info msg="Container to stop \"1259c776f1c174aa305a94af0da7fdefc428a61e126634d90d541e11a56d4ea6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:44:42.301525 env[1140]: time="2024-02-09T09:44:42.301508251Z" level=info msg="Container to stop \"954df0b20cb20984070ca7067698245d301fcd0963811da02a1ef95a8aefcf55\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:44:42.301586 env[1140]: time="2024-02-09T09:44:42.301569651Z" level=info msg="Container to stop \"b2d3f9594eeb4685ab701341cc131f95ae69e7ccc26c7c01be9bb31ed4a2bbc8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:44:42.301646 env[1140]: time="2024-02-09T09:44:42.301630051Z" level=info msg="Container to stop 
\"bcb16267d7a630ff8860889bba06834111fce4b82e9b56f54ed1bae4492b4be9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:44:42.301706 env[1140]: time="2024-02-09T09:44:42.301691371Z" level=info msg="Container to stop \"64eee29ceb23a66122f663f0ed455cce0d95abddc6b18774c20addc67301c7ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:44:42.303086 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c20b4c2e87241269206585f59d7f5cff6da60decd524bee5a2d10acf0b625ae5-shm.mount: Deactivated successfully. Feb 9 09:44:42.310425 systemd[1]: cri-containerd-c20b4c2e87241269206585f59d7f5cff6da60decd524bee5a2d10acf0b625ae5.scope: Deactivated successfully. Feb 9 09:44:42.330555 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c20b4c2e87241269206585f59d7f5cff6da60decd524bee5a2d10acf0b625ae5-rootfs.mount: Deactivated successfully. Feb 9 09:44:42.335961 env[1140]: time="2024-02-09T09:44:42.335917779Z" level=info msg="shim disconnected" id=c20b4c2e87241269206585f59d7f5cff6da60decd524bee5a2d10acf0b625ae5 Feb 9 09:44:42.335961 env[1140]: time="2024-02-09T09:44:42.335962379Z" level=warning msg="cleaning up after shim disconnected" id=c20b4c2e87241269206585f59d7f5cff6da60decd524bee5a2d10acf0b625ae5 namespace=k8s.io Feb 9 09:44:42.336116 env[1140]: time="2024-02-09T09:44:42.335972099Z" level=info msg="cleaning up dead shim" Feb 9 09:44:42.343066 env[1140]: time="2024-02-09T09:44:42.343017061Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:44:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3019 runtime=io.containerd.runc.v2\n" Feb 9 09:44:42.343351 env[1140]: time="2024-02-09T09:44:42.343324101Z" level=info msg="TearDown network for sandbox \"c20b4c2e87241269206585f59d7f5cff6da60decd524bee5a2d10acf0b625ae5\" successfully" Feb 9 09:44:42.343351 env[1140]: time="2024-02-09T09:44:42.343348141Z" level=info msg="StopPodSandbox for 
\"c20b4c2e87241269206585f59d7f5cff6da60decd524bee5a2d10acf0b625ae5\" returns successfully" Feb 9 09:44:42.470052 kubelet[1406]: I0209 09:44:42.468938 1406 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-etc-cni-netd\") pod \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\" (UID: \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\") " Feb 9 09:44:42.470052 kubelet[1406]: I0209 09:44:42.468988 1406 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/086156e2-659d-400f-9b24-ebbcd4ae9dd5-clustermesh-secrets\") pod \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\" (UID: \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\") " Feb 9 09:44:42.470052 kubelet[1406]: I0209 09:44:42.469009 1406 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/086156e2-659d-400f-9b24-ebbcd4ae9dd5-hubble-tls\") pod \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\" (UID: \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\") " Feb 9 09:44:42.470052 kubelet[1406]: I0209 09:44:42.469027 1406 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-xtables-lock\") pod \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\" (UID: \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\") " Feb 9 09:44:42.470052 kubelet[1406]: I0209 09:44:42.469051 1406 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-cilium-run\") pod \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\" (UID: \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\") " Feb 9 09:44:42.470052 kubelet[1406]: I0209 09:44:42.469073 1406 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-bpf-maps\") pod \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\" (UID: \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\") " Feb 9 09:44:42.470326 kubelet[1406]: I0209 09:44:42.469094 1406 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-host-proc-sys-kernel\") pod \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\" (UID: \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\") " Feb 9 09:44:42.470326 kubelet[1406]: I0209 09:44:42.469111 1406 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-cilium-cgroup\") pod \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\" (UID: \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\") " Feb 9 09:44:42.470326 kubelet[1406]: I0209 09:44:42.469128 1406 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-cni-path\") pod \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\" (UID: \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\") " Feb 9 09:44:42.470326 kubelet[1406]: I0209 09:44:42.469118 1406 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "086156e2-659d-400f-9b24-ebbcd4ae9dd5" (UID: "086156e2-659d-400f-9b24-ebbcd4ae9dd5"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:42.470326 kubelet[1406]: I0209 09:44:42.469148 1406 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-hostproc\") pod \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\" (UID: \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\") " Feb 9 09:44:42.470326 kubelet[1406]: I0209 09:44:42.469170 1406 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6skt\" (UniqueName: \"kubernetes.io/projected/086156e2-659d-400f-9b24-ebbcd4ae9dd5-kube-api-access-v6skt\") pod \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\" (UID: \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\") " Feb 9 09:44:42.470559 kubelet[1406]: I0209 09:44:42.469218 1406 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/086156e2-659d-400f-9b24-ebbcd4ae9dd5-cilium-config-path\") pod \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\" (UID: \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\") " Feb 9 09:44:42.470559 kubelet[1406]: I0209 09:44:42.469239 1406 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-host-proc-sys-net\") pod \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\" (UID: \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\") " Feb 9 09:44:42.470559 kubelet[1406]: I0209 09:44:42.469257 1406 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-lib-modules\") pod \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\" (UID: \"086156e2-659d-400f-9b24-ebbcd4ae9dd5\") " Feb 9 09:44:42.470559 kubelet[1406]: I0209 09:44:42.469282 1406 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-xtables-lock\") on node \"10.0.0.14\" DevicePath \"\"" Feb 9 09:44:42.470559 kubelet[1406]: I0209 09:44:42.469306 1406 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "086156e2-659d-400f-9b24-ebbcd4ae9dd5" (UID: "086156e2-659d-400f-9b24-ebbcd4ae9dd5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:42.470559 kubelet[1406]: I0209 09:44:42.469327 1406 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "086156e2-659d-400f-9b24-ebbcd4ae9dd5" (UID: "086156e2-659d-400f-9b24-ebbcd4ae9dd5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:42.470691 kubelet[1406]: I0209 09:44:42.469344 1406 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "086156e2-659d-400f-9b24-ebbcd4ae9dd5" (UID: "086156e2-659d-400f-9b24-ebbcd4ae9dd5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:42.470691 kubelet[1406]: I0209 09:44:42.469357 1406 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "086156e2-659d-400f-9b24-ebbcd4ae9dd5" (UID: "086156e2-659d-400f-9b24-ebbcd4ae9dd5"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:42.470691 kubelet[1406]: I0209 09:44:42.469374 1406 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "086156e2-659d-400f-9b24-ebbcd4ae9dd5" (UID: "086156e2-659d-400f-9b24-ebbcd4ae9dd5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:42.470691 kubelet[1406]: I0209 09:44:42.469560 1406 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "086156e2-659d-400f-9b24-ebbcd4ae9dd5" (UID: "086156e2-659d-400f-9b24-ebbcd4ae9dd5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:42.470691 kubelet[1406]: I0209 09:44:42.469594 1406 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "086156e2-659d-400f-9b24-ebbcd4ae9dd5" (UID: "086156e2-659d-400f-9b24-ebbcd4ae9dd5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:42.470817 kubelet[1406]: I0209 09:44:42.469629 1406 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-cni-path" (OuterVolumeSpecName: "cni-path") pod "086156e2-659d-400f-9b24-ebbcd4ae9dd5" (UID: "086156e2-659d-400f-9b24-ebbcd4ae9dd5"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:42.470817 kubelet[1406]: I0209 09:44:42.469648 1406 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-hostproc" (OuterVolumeSpecName: "hostproc") pod "086156e2-659d-400f-9b24-ebbcd4ae9dd5" (UID: "086156e2-659d-400f-9b24-ebbcd4ae9dd5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:42.470817 kubelet[1406]: W0209 09:44:42.469751 1406 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/086156e2-659d-400f-9b24-ebbcd4ae9dd5/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 09:44:42.471924 kubelet[1406]: I0209 09:44:42.471846 1406 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/086156e2-659d-400f-9b24-ebbcd4ae9dd5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "086156e2-659d-400f-9b24-ebbcd4ae9dd5" (UID: "086156e2-659d-400f-9b24-ebbcd4ae9dd5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 09:44:42.472018 kubelet[1406]: I0209 09:44:42.471940 1406 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/086156e2-659d-400f-9b24-ebbcd4ae9dd5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "086156e2-659d-400f-9b24-ebbcd4ae9dd5" (UID: "086156e2-659d-400f-9b24-ebbcd4ae9dd5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:44:42.472546 kubelet[1406]: I0209 09:44:42.472510 1406 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/086156e2-659d-400f-9b24-ebbcd4ae9dd5-kube-api-access-v6skt" (OuterVolumeSpecName: "kube-api-access-v6skt") pod "086156e2-659d-400f-9b24-ebbcd4ae9dd5" (UID: "086156e2-659d-400f-9b24-ebbcd4ae9dd5"). 
InnerVolumeSpecName "kube-api-access-v6skt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:44:42.473895 kubelet[1406]: I0209 09:44:42.473866 1406 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/086156e2-659d-400f-9b24-ebbcd4ae9dd5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "086156e2-659d-400f-9b24-ebbcd4ae9dd5" (UID: "086156e2-659d-400f-9b24-ebbcd4ae9dd5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:44:42.570296 kubelet[1406]: I0209 09:44:42.570260 1406 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-etc-cni-netd\") on node \"10.0.0.14\" DevicePath \"\"" Feb 9 09:44:42.570296 kubelet[1406]: I0209 09:44:42.570293 1406 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/086156e2-659d-400f-9b24-ebbcd4ae9dd5-clustermesh-secrets\") on node \"10.0.0.14\" DevicePath \"\"" Feb 9 09:44:42.570431 kubelet[1406]: I0209 09:44:42.570306 1406 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/086156e2-659d-400f-9b24-ebbcd4ae9dd5-hubble-tls\") on node \"10.0.0.14\" DevicePath \"\"" Feb 9 09:44:42.570431 kubelet[1406]: I0209 09:44:42.570317 1406 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-host-proc-sys-kernel\") on node \"10.0.0.14\" DevicePath \"\"" Feb 9 09:44:42.570431 kubelet[1406]: I0209 09:44:42.570325 1406 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-cilium-run\") on node \"10.0.0.14\" DevicePath \"\"" Feb 9 09:44:42.570431 kubelet[1406]: I0209 09:44:42.570334 1406 reconciler_common.go:295] "Volume detached for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-bpf-maps\") on node \"10.0.0.14\" DevicePath \"\"" Feb 9 09:44:42.570431 kubelet[1406]: I0209 09:44:42.570342 1406 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-hostproc\") on node \"10.0.0.14\" DevicePath \"\"" Feb 9 09:44:42.570431 kubelet[1406]: I0209 09:44:42.570351 1406 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-cilium-cgroup\") on node \"10.0.0.14\" DevicePath \"\"" Feb 9 09:44:42.570431 kubelet[1406]: I0209 09:44:42.570360 1406 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-cni-path\") on node \"10.0.0.14\" DevicePath \"\"" Feb 9 09:44:42.570431 kubelet[1406]: I0209 09:44:42.570369 1406 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-lib-modules\") on node \"10.0.0.14\" DevicePath \"\"" Feb 9 09:44:42.570601 kubelet[1406]: I0209 09:44:42.570378 1406 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-v6skt\" (UniqueName: \"kubernetes.io/projected/086156e2-659d-400f-9b24-ebbcd4ae9dd5-kube-api-access-v6skt\") on node \"10.0.0.14\" DevicePath \"\"" Feb 9 09:44:42.570601 kubelet[1406]: I0209 09:44:42.570387 1406 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/086156e2-659d-400f-9b24-ebbcd4ae9dd5-cilium-config-path\") on node \"10.0.0.14\" DevicePath \"\"" Feb 9 09:44:42.570601 kubelet[1406]: I0209 09:44:42.570396 1406 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/086156e2-659d-400f-9b24-ebbcd4ae9dd5-host-proc-sys-net\") on node 
\"10.0.0.14\" DevicePath \"\"" Feb 9 09:44:42.653888 kubelet[1406]: E0209 09:44:42.653840 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:42.670434 kubelet[1406]: E0209 09:44:42.670408 1406 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 09:44:42.868159 kubelet[1406]: I0209 09:44:42.868079 1406 scope.go:115] "RemoveContainer" containerID="bcb16267d7a630ff8860889bba06834111fce4b82e9b56f54ed1bae4492b4be9" Feb 9 09:44:42.870784 env[1140]: time="2024-02-09T09:44:42.870547261Z" level=info msg="RemoveContainer for \"bcb16267d7a630ff8860889bba06834111fce4b82e9b56f54ed1bae4492b4be9\"" Feb 9 09:44:42.871771 systemd[1]: Removed slice kubepods-burstable-pod086156e2_659d_400f_9b24_ebbcd4ae9dd5.slice. Feb 9 09:44:42.871865 systemd[1]: kubepods-burstable-pod086156e2_659d_400f_9b24_ebbcd4ae9dd5.slice: Consumed 6.713s CPU time. 
Feb 9 09:44:42.878332 env[1140]: time="2024-02-09T09:44:42.878297903Z" level=info msg="RemoveContainer for \"bcb16267d7a630ff8860889bba06834111fce4b82e9b56f54ed1bae4492b4be9\" returns successfully" Feb 9 09:44:42.878607 kubelet[1406]: I0209 09:44:42.878589 1406 scope.go:115] "RemoveContainer" containerID="b2d3f9594eeb4685ab701341cc131f95ae69e7ccc26c7c01be9bb31ed4a2bbc8" Feb 9 09:44:42.879620 env[1140]: time="2024-02-09T09:44:42.879594463Z" level=info msg="RemoveContainer for \"b2d3f9594eeb4685ab701341cc131f95ae69e7ccc26c7c01be9bb31ed4a2bbc8\"" Feb 9 09:44:42.881620 env[1140]: time="2024-02-09T09:44:42.881593264Z" level=info msg="RemoveContainer for \"b2d3f9594eeb4685ab701341cc131f95ae69e7ccc26c7c01be9bb31ed4a2bbc8\" returns successfully" Feb 9 09:44:42.881748 kubelet[1406]: I0209 09:44:42.881731 1406 scope.go:115] "RemoveContainer" containerID="64eee29ceb23a66122f663f0ed455cce0d95abddc6b18774c20addc67301c7ce" Feb 9 09:44:42.882573 env[1140]: time="2024-02-09T09:44:42.882517984Z" level=info msg="RemoveContainer for \"64eee29ceb23a66122f663f0ed455cce0d95abddc6b18774c20addc67301c7ce\"" Feb 9 09:44:42.884675 env[1140]: time="2024-02-09T09:44:42.884644985Z" level=info msg="RemoveContainer for \"64eee29ceb23a66122f663f0ed455cce0d95abddc6b18774c20addc67301c7ce\" returns successfully" Feb 9 09:44:42.884831 kubelet[1406]: I0209 09:44:42.884812 1406 scope.go:115] "RemoveContainer" containerID="954df0b20cb20984070ca7067698245d301fcd0963811da02a1ef95a8aefcf55" Feb 9 09:44:42.885798 env[1140]: time="2024-02-09T09:44:42.885755865Z" level=info msg="RemoveContainer for \"954df0b20cb20984070ca7067698245d301fcd0963811da02a1ef95a8aefcf55\"" Feb 9 09:44:42.888378 env[1140]: time="2024-02-09T09:44:42.888346745Z" level=info msg="RemoveContainer for \"954df0b20cb20984070ca7067698245d301fcd0963811da02a1ef95a8aefcf55\" returns successfully" Feb 9 09:44:42.888549 kubelet[1406]: I0209 09:44:42.888531 1406 scope.go:115] "RemoveContainer" 
containerID="1259c776f1c174aa305a94af0da7fdefc428a61e126634d90d541e11a56d4ea6" Feb 9 09:44:42.889677 env[1140]: time="2024-02-09T09:44:42.889606746Z" level=info msg="RemoveContainer for \"1259c776f1c174aa305a94af0da7fdefc428a61e126634d90d541e11a56d4ea6\"" Feb 9 09:44:42.891475 env[1140]: time="2024-02-09T09:44:42.891446426Z" level=info msg="RemoveContainer for \"1259c776f1c174aa305a94af0da7fdefc428a61e126634d90d541e11a56d4ea6\" returns successfully" Feb 9 09:44:42.891599 kubelet[1406]: I0209 09:44:42.891581 1406 scope.go:115] "RemoveContainer" containerID="bcb16267d7a630ff8860889bba06834111fce4b82e9b56f54ed1bae4492b4be9" Feb 9 09:44:42.891822 env[1140]: time="2024-02-09T09:44:42.891721426Z" level=error msg="ContainerStatus for \"bcb16267d7a630ff8860889bba06834111fce4b82e9b56f54ed1bae4492b4be9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bcb16267d7a630ff8860889bba06834111fce4b82e9b56f54ed1bae4492b4be9\": not found" Feb 9 09:44:42.891994 kubelet[1406]: E0209 09:44:42.891978 1406 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bcb16267d7a630ff8860889bba06834111fce4b82e9b56f54ed1bae4492b4be9\": not found" containerID="bcb16267d7a630ff8860889bba06834111fce4b82e9b56f54ed1bae4492b4be9" Feb 9 09:44:42.892101 kubelet[1406]: I0209 09:44:42.892087 1406 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:bcb16267d7a630ff8860889bba06834111fce4b82e9b56f54ed1bae4492b4be9} err="failed to get container status \"bcb16267d7a630ff8860889bba06834111fce4b82e9b56f54ed1bae4492b4be9\": rpc error: code = NotFound desc = an error occurred when try to find container \"bcb16267d7a630ff8860889bba06834111fce4b82e9b56f54ed1bae4492b4be9\": not found" Feb 9 09:44:42.892164 kubelet[1406]: I0209 09:44:42.892154 1406 scope.go:115] "RemoveContainer" 
containerID="b2d3f9594eeb4685ab701341cc131f95ae69e7ccc26c7c01be9bb31ed4a2bbc8" Feb 9 09:44:42.892409 env[1140]: time="2024-02-09T09:44:42.892358106Z" level=error msg="ContainerStatus for \"b2d3f9594eeb4685ab701341cc131f95ae69e7ccc26c7c01be9bb31ed4a2bbc8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b2d3f9594eeb4685ab701341cc131f95ae69e7ccc26c7c01be9bb31ed4a2bbc8\": not found" Feb 9 09:44:42.892518 kubelet[1406]: E0209 09:44:42.892499 1406 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b2d3f9594eeb4685ab701341cc131f95ae69e7ccc26c7c01be9bb31ed4a2bbc8\": not found" containerID="b2d3f9594eeb4685ab701341cc131f95ae69e7ccc26c7c01be9bb31ed4a2bbc8" Feb 9 09:44:42.892553 kubelet[1406]: I0209 09:44:42.892533 1406 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:b2d3f9594eeb4685ab701341cc131f95ae69e7ccc26c7c01be9bb31ed4a2bbc8} err="failed to get container status \"b2d3f9594eeb4685ab701341cc131f95ae69e7ccc26c7c01be9bb31ed4a2bbc8\": rpc error: code = NotFound desc = an error occurred when try to find container \"b2d3f9594eeb4685ab701341cc131f95ae69e7ccc26c7c01be9bb31ed4a2bbc8\": not found" Feb 9 09:44:42.892553 kubelet[1406]: I0209 09:44:42.892545 1406 scope.go:115] "RemoveContainer" containerID="64eee29ceb23a66122f663f0ed455cce0d95abddc6b18774c20addc67301c7ce" Feb 9 09:44:42.892692 env[1140]: time="2024-02-09T09:44:42.892654746Z" level=error msg="ContainerStatus for \"64eee29ceb23a66122f663f0ed455cce0d95abddc6b18774c20addc67301c7ce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"64eee29ceb23a66122f663f0ed455cce0d95abddc6b18774c20addc67301c7ce\": not found" Feb 9 09:44:42.892764 kubelet[1406]: E0209 09:44:42.892751 1406 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"64eee29ceb23a66122f663f0ed455cce0d95abddc6b18774c20addc67301c7ce\": not found" containerID="64eee29ceb23a66122f663f0ed455cce0d95abddc6b18774c20addc67301c7ce" Feb 9 09:44:42.892800 kubelet[1406]: I0209 09:44:42.892773 1406 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:64eee29ceb23a66122f663f0ed455cce0d95abddc6b18774c20addc67301c7ce} err="failed to get container status \"64eee29ceb23a66122f663f0ed455cce0d95abddc6b18774c20addc67301c7ce\": rpc error: code = NotFound desc = an error occurred when try to find container \"64eee29ceb23a66122f663f0ed455cce0d95abddc6b18774c20addc67301c7ce\": not found" Feb 9 09:44:42.892826 kubelet[1406]: I0209 09:44:42.892812 1406 scope.go:115] "RemoveContainer" containerID="954df0b20cb20984070ca7067698245d301fcd0963811da02a1ef95a8aefcf55" Feb 9 09:44:42.892965 env[1140]: time="2024-02-09T09:44:42.892931987Z" level=error msg="ContainerStatus for \"954df0b20cb20984070ca7067698245d301fcd0963811da02a1ef95a8aefcf55\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"954df0b20cb20984070ca7067698245d301fcd0963811da02a1ef95a8aefcf55\": not found" Feb 9 09:44:42.893074 kubelet[1406]: E0209 09:44:42.893062 1406 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"954df0b20cb20984070ca7067698245d301fcd0963811da02a1ef95a8aefcf55\": not found" containerID="954df0b20cb20984070ca7067698245d301fcd0963811da02a1ef95a8aefcf55" Feb 9 09:44:42.893118 kubelet[1406]: I0209 09:44:42.893081 1406 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:954df0b20cb20984070ca7067698245d301fcd0963811da02a1ef95a8aefcf55} err="failed to get container status \"954df0b20cb20984070ca7067698245d301fcd0963811da02a1ef95a8aefcf55\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"954df0b20cb20984070ca7067698245d301fcd0963811da02a1ef95a8aefcf55\": not found" Feb 9 09:44:42.893118 kubelet[1406]: I0209 09:44:42.893089 1406 scope.go:115] "RemoveContainer" containerID="1259c776f1c174aa305a94af0da7fdefc428a61e126634d90d541e11a56d4ea6" Feb 9 09:44:42.893239 env[1140]: time="2024-02-09T09:44:42.893182067Z" level=error msg="ContainerStatus for \"1259c776f1c174aa305a94af0da7fdefc428a61e126634d90d541e11a56d4ea6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1259c776f1c174aa305a94af0da7fdefc428a61e126634d90d541e11a56d4ea6\": not found" Feb 9 09:44:42.893356 kubelet[1406]: E0209 09:44:42.893340 1406 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1259c776f1c174aa305a94af0da7fdefc428a61e126634d90d541e11a56d4ea6\": not found" containerID="1259c776f1c174aa305a94af0da7fdefc428a61e126634d90d541e11a56d4ea6" Feb 9 09:44:42.893437 kubelet[1406]: I0209 09:44:42.893426 1406 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:1259c776f1c174aa305a94af0da7fdefc428a61e126634d90d541e11a56d4ea6} err="failed to get container status \"1259c776f1c174aa305a94af0da7fdefc428a61e126634d90d541e11a56d4ea6\": rpc error: code = NotFound desc = an error occurred when try to find container \"1259c776f1c174aa305a94af0da7fdefc428a61e126634d90d541e11a56d4ea6\": not found" Feb 9 09:44:43.143129 systemd[1]: var-lib-kubelet-pods-086156e2\x2d659d\x2d400f\x2d9b24\x2debbcd4ae9dd5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv6skt.mount: Deactivated successfully. Feb 9 09:44:43.143266 systemd[1]: var-lib-kubelet-pods-086156e2\x2d659d\x2d400f\x2d9b24\x2debbcd4ae9dd5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 9 09:44:43.143329 systemd[1]: var-lib-kubelet-pods-086156e2\x2d659d\x2d400f\x2d9b24\x2debbcd4ae9dd5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 09:44:43.654146 kubelet[1406]: E0209 09:44:43.654106 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:43.755149 kubelet[1406]: I0209 09:44:43.755123 1406 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=086156e2-659d-400f-9b24-ebbcd4ae9dd5 path="/var/lib/kubelet/pods/086156e2-659d-400f-9b24-ebbcd4ae9dd5/volumes" Feb 9 09:44:44.655311 kubelet[1406]: E0209 09:44:44.655281 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:45.315585 kubelet[1406]: I0209 09:44:45.315532 1406 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:44:45.315722 kubelet[1406]: E0209 09:44:45.315600 1406 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="086156e2-659d-400f-9b24-ebbcd4ae9dd5" containerName="clean-cilium-state" Feb 9 09:44:45.315722 kubelet[1406]: E0209 09:44:45.315610 1406 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="086156e2-659d-400f-9b24-ebbcd4ae9dd5" containerName="mount-cgroup" Feb 9 09:44:45.315722 kubelet[1406]: E0209 09:44:45.315618 1406 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="086156e2-659d-400f-9b24-ebbcd4ae9dd5" containerName="apply-sysctl-overwrites" Feb 9 09:44:45.315722 kubelet[1406]: E0209 09:44:45.315624 1406 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="086156e2-659d-400f-9b24-ebbcd4ae9dd5" containerName="mount-bpf-fs" Feb 9 09:44:45.315722 kubelet[1406]: E0209 09:44:45.315632 1406 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="086156e2-659d-400f-9b24-ebbcd4ae9dd5" containerName="cilium-agent" Feb 9 09:44:45.315722 kubelet[1406]: I0209 09:44:45.315650 1406 memory_manager.go:346] 
"RemoveStaleState removing state" podUID="086156e2-659d-400f-9b24-ebbcd4ae9dd5" containerName="cilium-agent" Feb 9 09:44:45.320625 systemd[1]: Created slice kubepods-besteffort-pod71c45dd8_5ea3_4959_8682_a038135a2ca9.slice. Feb 9 09:44:45.321773 kubelet[1406]: I0209 09:44:45.321740 1406 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:44:45.326729 systemd[1]: Created slice kubepods-burstable-pod971a50b7_acff_477a_add4_d081f08c36f4.slice. Feb 9 09:44:45.484111 kubelet[1406]: I0209 09:44:45.484052 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/971a50b7-acff-477a-add4-d081f08c36f4-cilium-ipsec-secrets\") pod \"cilium-xlgqn\" (UID: \"971a50b7-acff-477a-add4-d081f08c36f4\") " pod="kube-system/cilium-xlgqn" Feb 9 09:44:45.484254 kubelet[1406]: I0209 09:44:45.484179 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-host-proc-sys-net\") pod \"cilium-xlgqn\" (UID: \"971a50b7-acff-477a-add4-d081f08c36f4\") " pod="kube-system/cilium-xlgqn" Feb 9 09:44:45.484308 kubelet[1406]: I0209 09:44:45.484263 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/971a50b7-acff-477a-add4-d081f08c36f4-hubble-tls\") pod \"cilium-xlgqn\" (UID: \"971a50b7-acff-477a-add4-d081f08c36f4\") " pod="kube-system/cilium-xlgqn" Feb 9 09:44:45.484308 kubelet[1406]: I0209 09:44:45.484299 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7s2r\" (UniqueName: \"kubernetes.io/projected/971a50b7-acff-477a-add4-d081f08c36f4-kube-api-access-w7s2r\") pod \"cilium-xlgqn\" (UID: \"971a50b7-acff-477a-add4-d081f08c36f4\") " pod="kube-system/cilium-xlgqn" Feb 9 09:44:45.484368 
kubelet[1406]: I0209 09:44:45.484322 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-cilium-run\") pod \"cilium-xlgqn\" (UID: \"971a50b7-acff-477a-add4-d081f08c36f4\") " pod="kube-system/cilium-xlgqn" Feb 9 09:44:45.484394 kubelet[1406]: I0209 09:44:45.484368 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-bpf-maps\") pod \"cilium-xlgqn\" (UID: \"971a50b7-acff-477a-add4-d081f08c36f4\") " pod="kube-system/cilium-xlgqn" Feb 9 09:44:45.484394 kubelet[1406]: I0209 09:44:45.484392 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-lib-modules\") pod \"cilium-xlgqn\" (UID: \"971a50b7-acff-477a-add4-d081f08c36f4\") " pod="kube-system/cilium-xlgqn" Feb 9 09:44:45.484439 kubelet[1406]: I0209 09:44:45.484416 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-xtables-lock\") pod \"cilium-xlgqn\" (UID: \"971a50b7-acff-477a-add4-d081f08c36f4\") " pod="kube-system/cilium-xlgqn" Feb 9 09:44:45.484463 kubelet[1406]: I0209 09:44:45.484447 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/971a50b7-acff-477a-add4-d081f08c36f4-clustermesh-secrets\") pod \"cilium-xlgqn\" (UID: \"971a50b7-acff-477a-add4-d081f08c36f4\") " pod="kube-system/cilium-xlgqn" Feb 9 09:44:45.484491 kubelet[1406]: I0209 09:44:45.484476 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-n8kbc\" (UniqueName: \"kubernetes.io/projected/71c45dd8-5ea3-4959-8682-a038135a2ca9-kube-api-access-n8kbc\") pod \"cilium-operator-f59cbd8c6-kj4kl\" (UID: \"71c45dd8-5ea3-4959-8682-a038135a2ca9\") " pod="kube-system/cilium-operator-f59cbd8c6-kj4kl" Feb 9 09:44:45.484515 kubelet[1406]: I0209 09:44:45.484498 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-hostproc\") pod \"cilium-xlgqn\" (UID: \"971a50b7-acff-477a-add4-d081f08c36f4\") " pod="kube-system/cilium-xlgqn" Feb 9 09:44:45.484537 kubelet[1406]: I0209 09:44:45.484525 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-cilium-cgroup\") pod \"cilium-xlgqn\" (UID: \"971a50b7-acff-477a-add4-d081f08c36f4\") " pod="kube-system/cilium-xlgqn" Feb 9 09:44:45.484561 kubelet[1406]: I0209 09:44:45.484553 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/971a50b7-acff-477a-add4-d081f08c36f4-cilium-config-path\") pod \"cilium-xlgqn\" (UID: \"971a50b7-acff-477a-add4-d081f08c36f4\") " pod="kube-system/cilium-xlgqn" Feb 9 09:44:45.484637 kubelet[1406]: I0209 09:44:45.484593 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/71c45dd8-5ea3-4959-8682-a038135a2ca9-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-kj4kl\" (UID: \"71c45dd8-5ea3-4959-8682-a038135a2ca9\") " pod="kube-system/cilium-operator-f59cbd8c6-kj4kl" Feb 9 09:44:45.484637 kubelet[1406]: I0209 09:44:45.484634 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-cni-path\") pod \"cilium-xlgqn\" (UID: \"971a50b7-acff-477a-add4-d081f08c36f4\") " pod="kube-system/cilium-xlgqn" Feb 9 09:44:45.484700 kubelet[1406]: I0209 09:44:45.484665 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-etc-cni-netd\") pod \"cilium-xlgqn\" (UID: \"971a50b7-acff-477a-add4-d081f08c36f4\") " pod="kube-system/cilium-xlgqn" Feb 9 09:44:45.484700 kubelet[1406]: I0209 09:44:45.484687 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-host-proc-sys-kernel\") pod \"cilium-xlgqn\" (UID: \"971a50b7-acff-477a-add4-d081f08c36f4\") " pod="kube-system/cilium-xlgqn" Feb 9 09:44:45.624522 kubelet[1406]: E0209 09:44:45.623481 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:45.624667 env[1140]: time="2024-02-09T09:44:45.624135665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-kj4kl,Uid:71c45dd8-5ea3-4959-8682-a038135a2ca9,Namespace:kube-system,Attempt:0,}" Feb 9 09:44:45.638075 env[1140]: time="2024-02-09T09:44:45.637882148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:44:45.638075 env[1140]: time="2024-02-09T09:44:45.637921548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:44:45.638075 env[1140]: time="2024-02-09T09:44:45.637931788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:44:45.638271 env[1140]: time="2024-02-09T09:44:45.638092708Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/faf973b75841cd4035c7088c6c1cbf63b2eaee59db87a66ad975fcfe9160c9b0 pid=3047 runtime=io.containerd.runc.v2 Feb 9 09:44:45.640233 kubelet[1406]: E0209 09:44:45.640181 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:45.640661 env[1140]: time="2024-02-09T09:44:45.640604988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xlgqn,Uid:971a50b7-acff-477a-add4-d081f08c36f4,Namespace:kube-system,Attempt:0,}" Feb 9 09:44:45.650192 systemd[1]: Started cri-containerd-faf973b75841cd4035c7088c6c1cbf63b2eaee59db87a66ad975fcfe9160c9b0.scope. Feb 9 09:44:45.652550 env[1140]: time="2024-02-09T09:44:45.652484511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:44:45.652687 env[1140]: time="2024-02-09T09:44:45.652537831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:44:45.652783 env[1140]: time="2024-02-09T09:44:45.652680191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:44:45.653046 env[1140]: time="2024-02-09T09:44:45.653002391Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/302b4487369405946594cdaa7506f622279bd4acb46eae31f3056901a1e219ab pid=3074 runtime=io.containerd.runc.v2 Feb 9 09:44:45.656290 kubelet[1406]: E0209 09:44:45.656259 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:45.673146 systemd[1]: Started cri-containerd-302b4487369405946594cdaa7506f622279bd4acb46eae31f3056901a1e219ab.scope. Feb 9 09:44:45.708960 env[1140]: time="2024-02-09T09:44:45.708916721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xlgqn,Uid:971a50b7-acff-477a-add4-d081f08c36f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"302b4487369405946594cdaa7506f622279bd4acb46eae31f3056901a1e219ab\"" Feb 9 09:44:45.709562 kubelet[1406]: E0209 09:44:45.709540 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:45.711843 env[1140]: time="2024-02-09T09:44:45.711795322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-kj4kl,Uid:71c45dd8-5ea3-4959-8682-a038135a2ca9,Namespace:kube-system,Attempt:0,} returns sandbox id \"faf973b75841cd4035c7088c6c1cbf63b2eaee59db87a66ad975fcfe9160c9b0\"" Feb 9 09:44:45.711946 env[1140]: time="2024-02-09T09:44:45.711819642Z" level=info msg="CreateContainer within sandbox \"302b4487369405946594cdaa7506f622279bd4acb46eae31f3056901a1e219ab\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 09:44:45.712290 kubelet[1406]: E0209 09:44:45.712271 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Feb 9 09:44:45.713025 env[1140]: time="2024-02-09T09:44:45.712977842Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 09:44:45.720899 env[1140]: time="2024-02-09T09:44:45.720808403Z" level=info msg="CreateContainer within sandbox \"302b4487369405946594cdaa7506f622279bd4acb46eae31f3056901a1e219ab\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4981830bafdcaec74ae229d18102ca91f882508f23939899008b1169866cc162\"" Feb 9 09:44:45.721197 env[1140]: time="2024-02-09T09:44:45.721169524Z" level=info msg="StartContainer for \"4981830bafdcaec74ae229d18102ca91f882508f23939899008b1169866cc162\"" Feb 9 09:44:45.734599 systemd[1]: Started cri-containerd-4981830bafdcaec74ae229d18102ca91f882508f23939899008b1169866cc162.scope. Feb 9 09:44:45.760272 systemd[1]: cri-containerd-4981830bafdcaec74ae229d18102ca91f882508f23939899008b1169866cc162.scope: Deactivated successfully. 
Feb 9 09:44:45.772746 env[1140]: time="2024-02-09T09:44:45.772691493Z" level=info msg="shim disconnected" id=4981830bafdcaec74ae229d18102ca91f882508f23939899008b1169866cc162 Feb 9 09:44:45.772746 env[1140]: time="2024-02-09T09:44:45.772743653Z" level=warning msg="cleaning up after shim disconnected" id=4981830bafdcaec74ae229d18102ca91f882508f23939899008b1169866cc162 namespace=k8s.io Feb 9 09:44:45.772746 env[1140]: time="2024-02-09T09:44:45.772753053Z" level=info msg="cleaning up dead shim" Feb 9 09:44:45.779332 env[1140]: time="2024-02-09T09:44:45.779286814Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:44:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3147 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T09:44:45Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/4981830bafdcaec74ae229d18102ca91f882508f23939899008b1169866cc162/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 09:44:45.779622 env[1140]: time="2024-02-09T09:44:45.779534614Z" level=error msg="copy shim log" error="read /proc/self/fd/57: file already closed" Feb 9 09:44:45.779837 env[1140]: time="2024-02-09T09:44:45.779789615Z" level=error msg="Failed to pipe stderr of container \"4981830bafdcaec74ae229d18102ca91f882508f23939899008b1169866cc162\"" error="reading from a closed fifo" Feb 9 09:44:45.780285 env[1140]: time="2024-02-09T09:44:45.780256335Z" level=error msg="Failed to pipe stdout of container \"4981830bafdcaec74ae229d18102ca91f882508f23939899008b1169866cc162\"" error="reading from a closed fifo" Feb 9 09:44:45.781963 env[1140]: time="2024-02-09T09:44:45.781916015Z" level=error msg="StartContainer for \"4981830bafdcaec74ae229d18102ca91f882508f23939899008b1169866cc162\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Feb 9 09:44:45.782182 kubelet[1406]: E0209 09:44:45.782146 1406 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="4981830bafdcaec74ae229d18102ca91f882508f23939899008b1169866cc162" Feb 9 09:44:45.782337 kubelet[1406]: E0209 09:44:45.782319 1406 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 09:44:45.782337 kubelet[1406]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 09:44:45.782337 kubelet[1406]: rm /hostbin/cilium-mount Feb 9 09:44:45.782337 kubelet[1406]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-w7s2r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-xlgqn_kube-system(971a50b7-acff-477a-add4-d081f08c36f4): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 09:44:45.782478 kubelet[1406]: E0209 09:44:45.782360 1406 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-xlgqn" podUID=971a50b7-acff-477a-add4-d081f08c36f4 Feb 9 09:44:45.875854 env[1140]: time="2024-02-09T09:44:45.875763433Z" level=info msg="StopPodSandbox for \"302b4487369405946594cdaa7506f622279bd4acb46eae31f3056901a1e219ab\"" Feb 9 09:44:45.876004 env[1140]: time="2024-02-09T09:44:45.875980633Z" level=info msg="Container to stop \"4981830bafdcaec74ae229d18102ca91f882508f23939899008b1169866cc162\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:44:45.883945 systemd[1]: cri-containerd-302b4487369405946594cdaa7506f622279bd4acb46eae31f3056901a1e219ab.scope: Deactivated successfully. 
Feb 9 09:44:45.910452 env[1140]: time="2024-02-09T09:44:45.910404119Z" level=info msg="shim disconnected" id=302b4487369405946594cdaa7506f622279bd4acb46eae31f3056901a1e219ab Feb 9 09:44:45.910452 env[1140]: time="2024-02-09T09:44:45.910453519Z" level=warning msg="cleaning up after shim disconnected" id=302b4487369405946594cdaa7506f622279bd4acb46eae31f3056901a1e219ab namespace=k8s.io Feb 9 09:44:45.910648 env[1140]: time="2024-02-09T09:44:45.910463519Z" level=info msg="cleaning up dead shim" Feb 9 09:44:45.917151 env[1140]: time="2024-02-09T09:44:45.917114600Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:44:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3178 runtime=io.containerd.runc.v2\n" Feb 9 09:44:45.917440 env[1140]: time="2024-02-09T09:44:45.917405480Z" level=info msg="TearDown network for sandbox \"302b4487369405946594cdaa7506f622279bd4acb46eae31f3056901a1e219ab\" successfully" Feb 9 09:44:45.917440 env[1140]: time="2024-02-09T09:44:45.917432080Z" level=info msg="StopPodSandbox for \"302b4487369405946594cdaa7506f622279bd4acb46eae31f3056901a1e219ab\" returns successfully" Feb 9 09:44:46.090316 kubelet[1406]: I0209 09:44:46.090271 1406 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-xtables-lock\") pod \"971a50b7-acff-477a-add4-d081f08c36f4\" (UID: \"971a50b7-acff-477a-add4-d081f08c36f4\") " Feb 9 09:44:46.090316 kubelet[1406]: I0209 09:44:46.090315 1406 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-etc-cni-netd\") pod \"971a50b7-acff-477a-add4-d081f08c36f4\" (UID: \"971a50b7-acff-477a-add4-d081f08c36f4\") " Feb 9 09:44:46.090519 kubelet[1406]: I0209 09:44:46.090335 1406 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-host-proc-sys-net\") pod \"971a50b7-acff-477a-add4-d081f08c36f4\" (UID: \"971a50b7-acff-477a-add4-d081f08c36f4\") " Feb 9 09:44:46.090519 kubelet[1406]: I0209 09:44:46.090359 1406 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/971a50b7-acff-477a-add4-d081f08c36f4-hubble-tls\") pod \"971a50b7-acff-477a-add4-d081f08c36f4\" (UID: \"971a50b7-acff-477a-add4-d081f08c36f4\") " Feb 9 09:44:46.090519 kubelet[1406]: I0209 09:44:46.090374 1406 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-cilium-run\") pod \"971a50b7-acff-477a-add4-d081f08c36f4\" (UID: \"971a50b7-acff-477a-add4-d081f08c36f4\") " Feb 9 09:44:46.090519 kubelet[1406]: I0209 09:44:46.090391 1406 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-host-proc-sys-kernel\") pod \"971a50b7-acff-477a-add4-d081f08c36f4\" (UID: \"971a50b7-acff-477a-add4-d081f08c36f4\") " Feb 9 09:44:46.090519 kubelet[1406]: I0209 09:44:46.090411 1406 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/971a50b7-acff-477a-add4-d081f08c36f4-clustermesh-secrets\") pod \"971a50b7-acff-477a-add4-d081f08c36f4\" (UID: \"971a50b7-acff-477a-add4-d081f08c36f4\") " Feb 9 09:44:46.090519 kubelet[1406]: I0209 09:44:46.090430 1406 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/971a50b7-acff-477a-add4-d081f08c36f4-cilium-config-path\") pod \"971a50b7-acff-477a-add4-d081f08c36f4\" (UID: \"971a50b7-acff-477a-add4-d081f08c36f4\") " Feb 9 09:44:46.090670 kubelet[1406]: I0209 09:44:46.090448 
1406 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-bpf-maps\") pod \"971a50b7-acff-477a-add4-d081f08c36f4\" (UID: \"971a50b7-acff-477a-add4-d081f08c36f4\") " Feb 9 09:44:46.090670 kubelet[1406]: I0209 09:44:46.090465 1406 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-lib-modules\") pod \"971a50b7-acff-477a-add4-d081f08c36f4\" (UID: \"971a50b7-acff-477a-add4-d081f08c36f4\") " Feb 9 09:44:46.090670 kubelet[1406]: I0209 09:44:46.090489 1406 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/971a50b7-acff-477a-add4-d081f08c36f4-cilium-ipsec-secrets\") pod \"971a50b7-acff-477a-add4-d081f08c36f4\" (UID: \"971a50b7-acff-477a-add4-d081f08c36f4\") " Feb 9 09:44:46.090670 kubelet[1406]: I0209 09:44:46.090523 1406 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7s2r\" (UniqueName: \"kubernetes.io/projected/971a50b7-acff-477a-add4-d081f08c36f4-kube-api-access-w7s2r\") pod \"971a50b7-acff-477a-add4-d081f08c36f4\" (UID: \"971a50b7-acff-477a-add4-d081f08c36f4\") " Feb 9 09:44:46.090670 kubelet[1406]: I0209 09:44:46.090539 1406 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-cni-path\") pod \"971a50b7-acff-477a-add4-d081f08c36f4\" (UID: \"971a50b7-acff-477a-add4-d081f08c36f4\") " Feb 9 09:44:46.090670 kubelet[1406]: I0209 09:44:46.090556 1406 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-cilium-cgroup\") pod \"971a50b7-acff-477a-add4-d081f08c36f4\" (UID: 
\"971a50b7-acff-477a-add4-d081f08c36f4\") " Feb 9 09:44:46.090803 kubelet[1406]: I0209 09:44:46.090572 1406 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-hostproc\") pod \"971a50b7-acff-477a-add4-d081f08c36f4\" (UID: \"971a50b7-acff-477a-add4-d081f08c36f4\") " Feb 9 09:44:46.090803 kubelet[1406]: I0209 09:44:46.090630 1406 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-hostproc" (OuterVolumeSpecName: "hostproc") pod "971a50b7-acff-477a-add4-d081f08c36f4" (UID: "971a50b7-acff-477a-add4-d081f08c36f4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:46.090803 kubelet[1406]: I0209 09:44:46.090654 1406 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "971a50b7-acff-477a-add4-d081f08c36f4" (UID: "971a50b7-acff-477a-add4-d081f08c36f4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:46.090803 kubelet[1406]: I0209 09:44:46.090669 1406 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "971a50b7-acff-477a-add4-d081f08c36f4" (UID: "971a50b7-acff-477a-add4-d081f08c36f4"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:46.090803 kubelet[1406]: I0209 09:44:46.090684 1406 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "971a50b7-acff-477a-add4-d081f08c36f4" (UID: "971a50b7-acff-477a-add4-d081f08c36f4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:46.091313 kubelet[1406]: I0209 09:44:46.090976 1406 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "971a50b7-acff-477a-add4-d081f08c36f4" (UID: "971a50b7-acff-477a-add4-d081f08c36f4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:46.091313 kubelet[1406]: I0209 09:44:46.091014 1406 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "971a50b7-acff-477a-add4-d081f08c36f4" (UID: "971a50b7-acff-477a-add4-d081f08c36f4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:46.091313 kubelet[1406]: I0209 09:44:46.091037 1406 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "971a50b7-acff-477a-add4-d081f08c36f4" (UID: "971a50b7-acff-477a-add4-d081f08c36f4"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:46.091313 kubelet[1406]: I0209 09:44:46.091040 1406 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "971a50b7-acff-477a-add4-d081f08c36f4" (UID: "971a50b7-acff-477a-add4-d081f08c36f4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:46.091313 kubelet[1406]: I0209 09:44:46.091083 1406 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-cni-path" (OuterVolumeSpecName: "cni-path") pod "971a50b7-acff-477a-add4-d081f08c36f4" (UID: "971a50b7-acff-477a-add4-d081f08c36f4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:46.091489 kubelet[1406]: W0209 09:44:46.091262 1406 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/971a50b7-acff-477a-add4-d081f08c36f4/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 09:44:46.093123 kubelet[1406]: I0209 09:44:46.093070 1406 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/971a50b7-acff-477a-add4-d081f08c36f4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "971a50b7-acff-477a-add4-d081f08c36f4" (UID: "971a50b7-acff-477a-add4-d081f08c36f4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 09:44:46.093222 kubelet[1406]: I0209 09:44:46.093127 1406 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "971a50b7-acff-477a-add4-d081f08c36f4" (UID: "971a50b7-acff-477a-add4-d081f08c36f4"). 
InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:44:46.093557 kubelet[1406]: I0209 09:44:46.093532 1406 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/971a50b7-acff-477a-add4-d081f08c36f4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "971a50b7-acff-477a-add4-d081f08c36f4" (UID: "971a50b7-acff-477a-add4-d081f08c36f4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:44:46.094868 kubelet[1406]: I0209 09:44:46.094838 1406 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/971a50b7-acff-477a-add4-d081f08c36f4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "971a50b7-acff-477a-add4-d081f08c36f4" (UID: "971a50b7-acff-477a-add4-d081f08c36f4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:44:46.094984 kubelet[1406]: I0209 09:44:46.094964 1406 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/971a50b7-acff-477a-add4-d081f08c36f4-kube-api-access-w7s2r" (OuterVolumeSpecName: "kube-api-access-w7s2r") pod "971a50b7-acff-477a-add4-d081f08c36f4" (UID: "971a50b7-acff-477a-add4-d081f08c36f4"). InnerVolumeSpecName "kube-api-access-w7s2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:44:46.098712 kubelet[1406]: I0209 09:44:46.098661 1406 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/971a50b7-acff-477a-add4-d081f08c36f4-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "971a50b7-acff-477a-add4-d081f08c36f4" (UID: "971a50b7-acff-477a-add4-d081f08c36f4"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:44:46.190826 kubelet[1406]: I0209 09:44:46.190779 1406 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/971a50b7-acff-477a-add4-d081f08c36f4-hubble-tls\") on node \"10.0.0.14\" DevicePath \"\"" Feb 9 09:44:46.190826 kubelet[1406]: I0209 09:44:46.190814 1406 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-cilium-run\") on node \"10.0.0.14\" DevicePath \"\"" Feb 9 09:44:46.190826 kubelet[1406]: I0209 09:44:46.190824 1406 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-xtables-lock\") on node \"10.0.0.14\" DevicePath \"\"" Feb 9 09:44:46.190826 kubelet[1406]: I0209 09:44:46.190833 1406 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-etc-cni-netd\") on node \"10.0.0.14\" DevicePath \"\"" Feb 9 09:44:46.191057 kubelet[1406]: I0209 09:44:46.190844 1406 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-host-proc-sys-net\") on node \"10.0.0.14\" DevicePath \"\"" Feb 9 09:44:46.191057 kubelet[1406]: I0209 09:44:46.190854 1406 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-host-proc-sys-kernel\") on node \"10.0.0.14\" DevicePath \"\"" Feb 9 09:44:46.191057 kubelet[1406]: I0209 09:44:46.190863 1406 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-lib-modules\") on node \"10.0.0.14\" DevicePath \"\"" Feb 9 09:44:46.191057 kubelet[1406]: I0209 09:44:46.190873 1406 
reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/971a50b7-acff-477a-add4-d081f08c36f4-clustermesh-secrets\") on node \"10.0.0.14\" DevicePath \"\"" Feb 9 09:44:46.191057 kubelet[1406]: I0209 09:44:46.190883 1406 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/971a50b7-acff-477a-add4-d081f08c36f4-cilium-config-path\") on node \"10.0.0.14\" DevicePath \"\"" Feb 9 09:44:46.191057 kubelet[1406]: I0209 09:44:46.190894 1406 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-bpf-maps\") on node \"10.0.0.14\" DevicePath \"\"" Feb 9 09:44:46.191057 kubelet[1406]: I0209 09:44:46.190902 1406 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-cni-path\") on node \"10.0.0.14\" DevicePath \"\"" Feb 9 09:44:46.191057 kubelet[1406]: I0209 09:44:46.190911 1406 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/971a50b7-acff-477a-add4-d081f08c36f4-cilium-ipsec-secrets\") on node \"10.0.0.14\" DevicePath \"\"" Feb 9 09:44:46.191275 kubelet[1406]: I0209 09:44:46.190921 1406 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-w7s2r\" (UniqueName: \"kubernetes.io/projected/971a50b7-acff-477a-add4-d081f08c36f4-kube-api-access-w7s2r\") on node \"10.0.0.14\" DevicePath \"\"" Feb 9 09:44:46.191275 kubelet[1406]: I0209 09:44:46.190930 1406 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-cilium-cgroup\") on node \"10.0.0.14\" DevicePath \"\"" Feb 9 09:44:46.191275 kubelet[1406]: I0209 09:44:46.190938 1406 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/971a50b7-acff-477a-add4-d081f08c36f4-hostproc\") on node \"10.0.0.14\" DevicePath \"\"" Feb 9 09:44:46.590361 systemd[1]: var-lib-kubelet-pods-971a50b7\x2dacff\x2d477a\x2dadd4\x2dd081f08c36f4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw7s2r.mount: Deactivated successfully. Feb 9 09:44:46.590448 systemd[1]: var-lib-kubelet-pods-971a50b7\x2dacff\x2d477a\x2dadd4\x2dd081f08c36f4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 09:44:46.590500 systemd[1]: var-lib-kubelet-pods-971a50b7\x2dacff\x2d477a\x2dadd4\x2dd081f08c36f4-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 09:44:46.590551 systemd[1]: var-lib-kubelet-pods-971a50b7\x2dacff\x2d477a\x2dadd4\x2dd081f08c36f4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 09:44:46.601536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3653518749.mount: Deactivated successfully. Feb 9 09:44:46.657092 kubelet[1406]: E0209 09:44:46.657023 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:44:46.879293 kubelet[1406]: I0209 09:44:46.878698 1406 scope.go:115] "RemoveContainer" containerID="4981830bafdcaec74ae229d18102ca91f882508f23939899008b1169866cc162" Feb 9 09:44:46.882593 env[1140]: time="2024-02-09T09:44:46.881717852Z" level=info msg="RemoveContainer for \"4981830bafdcaec74ae229d18102ca91f882508f23939899008b1169866cc162\"" Feb 9 09:44:46.882175 systemd[1]: Removed slice kubepods-burstable-pod971a50b7_acff_477a_add4_d081f08c36f4.slice. 
Feb 9 09:44:46.884331 env[1140]: time="2024-02-09T09:44:46.884301252Z" level=info msg="RemoveContainer for \"4981830bafdcaec74ae229d18102ca91f882508f23939899008b1169866cc162\" returns successfully" Feb 9 09:44:46.906540 kubelet[1406]: I0209 09:44:46.906511 1406 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:44:46.906670 kubelet[1406]: E0209 09:44:46.906559 1406 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="971a50b7-acff-477a-add4-d081f08c36f4" containerName="mount-cgroup" Feb 9 09:44:46.906670 kubelet[1406]: I0209 09:44:46.906580 1406 memory_manager.go:346] "RemoveStaleState removing state" podUID="971a50b7-acff-477a-add4-d081f08c36f4" containerName="mount-cgroup" Feb 9 09:44:46.911107 systemd[1]: Created slice kubepods-burstable-pod49134e95_8775_4a44_afd9_cefb05ed2b2d.slice. Feb 9 09:44:47.055229 env[1140]: time="2024-02-09T09:44:47.055165202Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:44:47.056634 env[1140]: time="2024-02-09T09:44:47.056608362Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:44:47.058204 env[1140]: time="2024-02-09T09:44:47.058155842Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:44:47.058674 env[1140]: time="2024-02-09T09:44:47.058642803Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference 
\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 9 09:44:47.060548 env[1140]: time="2024-02-09T09:44:47.060517963Z" level=info msg="CreateContainer within sandbox \"faf973b75841cd4035c7088c6c1cbf63b2eaee59db87a66ad975fcfe9160c9b0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 09:44:47.068500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3545818248.mount: Deactivated successfully. Feb 9 09:44:47.070160 env[1140]: time="2024-02-09T09:44:47.070121524Z" level=info msg="CreateContainer within sandbox \"faf973b75841cd4035c7088c6c1cbf63b2eaee59db87a66ad975fcfe9160c9b0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f3b33d241d9691d880710330905edf7eb07f0e766510d6f90aa7725d51930205\"" Feb 9 09:44:47.070612 env[1140]: time="2024-02-09T09:44:47.070582165Z" level=info msg="StartContainer for \"f3b33d241d9691d880710330905edf7eb07f0e766510d6f90aa7725d51930205\"" Feb 9 09:44:47.086388 systemd[1]: Started cri-containerd-f3b33d241d9691d880710330905edf7eb07f0e766510d6f90aa7725d51930205.scope. 
Feb 9 09:44:47.096095 kubelet[1406]: I0209 09:44:47.095572 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/49134e95-8775-4a44-afd9-cefb05ed2b2d-hostproc\") pod \"cilium-lz47w\" (UID: \"49134e95-8775-4a44-afd9-cefb05ed2b2d\") " pod="kube-system/cilium-lz47w"
Feb 9 09:44:47.096095 kubelet[1406]: I0209 09:44:47.095614 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/49134e95-8775-4a44-afd9-cefb05ed2b2d-cilium-cgroup\") pod \"cilium-lz47w\" (UID: \"49134e95-8775-4a44-afd9-cefb05ed2b2d\") " pod="kube-system/cilium-lz47w"
Feb 9 09:44:47.096095 kubelet[1406]: I0209 09:44:47.095636 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49134e95-8775-4a44-afd9-cefb05ed2b2d-lib-modules\") pod \"cilium-lz47w\" (UID: \"49134e95-8775-4a44-afd9-cefb05ed2b2d\") " pod="kube-system/cilium-lz47w"
Feb 9 09:44:47.096095 kubelet[1406]: I0209 09:44:47.095657 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/49134e95-8775-4a44-afd9-cefb05ed2b2d-cilium-ipsec-secrets\") pod \"cilium-lz47w\" (UID: \"49134e95-8775-4a44-afd9-cefb05ed2b2d\") " pod="kube-system/cilium-lz47w"
Feb 9 09:44:47.096095 kubelet[1406]: I0209 09:44:47.095678 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/49134e95-8775-4a44-afd9-cefb05ed2b2d-cilium-run\") pod \"cilium-lz47w\" (UID: \"49134e95-8775-4a44-afd9-cefb05ed2b2d\") " pod="kube-system/cilium-lz47w"
Feb 9 09:44:47.096095 kubelet[1406]: I0209 09:44:47.095697 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/49134e95-8775-4a44-afd9-cefb05ed2b2d-bpf-maps\") pod \"cilium-lz47w\" (UID: \"49134e95-8775-4a44-afd9-cefb05ed2b2d\") " pod="kube-system/cilium-lz47w"
Feb 9 09:44:47.096384 kubelet[1406]: I0209 09:44:47.095719 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nghjg\" (UniqueName: \"kubernetes.io/projected/49134e95-8775-4a44-afd9-cefb05ed2b2d-kube-api-access-nghjg\") pod \"cilium-lz47w\" (UID: \"49134e95-8775-4a44-afd9-cefb05ed2b2d\") " pod="kube-system/cilium-lz47w"
Feb 9 09:44:47.096384 kubelet[1406]: I0209 09:44:47.095737 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/49134e95-8775-4a44-afd9-cefb05ed2b2d-etc-cni-netd\") pod \"cilium-lz47w\" (UID: \"49134e95-8775-4a44-afd9-cefb05ed2b2d\") " pod="kube-system/cilium-lz47w"
Feb 9 09:44:47.096384 kubelet[1406]: I0209 09:44:47.095754 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49134e95-8775-4a44-afd9-cefb05ed2b2d-xtables-lock\") pod \"cilium-lz47w\" (UID: \"49134e95-8775-4a44-afd9-cefb05ed2b2d\") " pod="kube-system/cilium-lz47w"
Feb 9 09:44:47.096384 kubelet[1406]: I0209 09:44:47.095773 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/49134e95-8775-4a44-afd9-cefb05ed2b2d-cilium-config-path\") pod \"cilium-lz47w\" (UID: \"49134e95-8775-4a44-afd9-cefb05ed2b2d\") " pod="kube-system/cilium-lz47w"
Feb 9 09:44:47.096384 kubelet[1406]: I0209 09:44:47.095795 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/49134e95-8775-4a44-afd9-cefb05ed2b2d-clustermesh-secrets\") pod \"cilium-lz47w\" (UID: \"49134e95-8775-4a44-afd9-cefb05ed2b2d\") " pod="kube-system/cilium-lz47w"
Feb 9 09:44:47.096384 kubelet[1406]: I0209 09:44:47.095815 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/49134e95-8775-4a44-afd9-cefb05ed2b2d-hubble-tls\") pod \"cilium-lz47w\" (UID: \"49134e95-8775-4a44-afd9-cefb05ed2b2d\") " pod="kube-system/cilium-lz47w"
Feb 9 09:44:47.096522 kubelet[1406]: I0209 09:44:47.095836 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/49134e95-8775-4a44-afd9-cefb05ed2b2d-cni-path\") pod \"cilium-lz47w\" (UID: \"49134e95-8775-4a44-afd9-cefb05ed2b2d\") " pod="kube-system/cilium-lz47w"
Feb 9 09:44:47.096522 kubelet[1406]: I0209 09:44:47.095867 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/49134e95-8775-4a44-afd9-cefb05ed2b2d-host-proc-sys-net\") pod \"cilium-lz47w\" (UID: \"49134e95-8775-4a44-afd9-cefb05ed2b2d\") " pod="kube-system/cilium-lz47w"
Feb 9 09:44:47.096522 kubelet[1406]: I0209 09:44:47.095891 1406 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/49134e95-8775-4a44-afd9-cefb05ed2b2d-host-proc-sys-kernel\") pod \"cilium-lz47w\" (UID: \"49134e95-8775-4a44-afd9-cefb05ed2b2d\") " pod="kube-system/cilium-lz47w"
Feb 9 09:44:47.160881 env[1140]: time="2024-02-09T09:44:47.160524259Z" level=info msg="StartContainer for \"f3b33d241d9691d880710330905edf7eb07f0e766510d6f90aa7725d51930205\" returns successfully"
Feb 9 09:44:47.657296 kubelet[1406]: E0209 09:44:47.657248 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:44:47.670830 kubelet[1406]: E0209 09:44:47.670796 1406 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 09:44:47.753975 env[1140]: time="2024-02-09T09:44:47.753834638Z" level=info msg="StopPodSandbox for \"302b4487369405946594cdaa7506f622279bd4acb46eae31f3056901a1e219ab\""
Feb 9 09:44:47.753975 env[1140]: time="2024-02-09T09:44:47.753932718Z" level=info msg="TearDown network for sandbox \"302b4487369405946594cdaa7506f622279bd4acb46eae31f3056901a1e219ab\" successfully"
Feb 9 09:44:47.753975 env[1140]: time="2024-02-09T09:44:47.753965238Z" level=info msg="StopPodSandbox for \"302b4487369405946594cdaa7506f622279bd4acb46eae31f3056901a1e219ab\" returns successfully"
Feb 9 09:44:47.755168 kubelet[1406]: I0209 09:44:47.755120 1406 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=971a50b7-acff-477a-add4-d081f08c36f4 path="/var/lib/kubelet/pods/971a50b7-acff-477a-add4-d081f08c36f4/volumes"
Feb 9 09:44:47.821685 kubelet[1406]: E0209 09:44:47.821619 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:44:47.822148 env[1140]: time="2024-02-09T09:44:47.822101929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lz47w,Uid:49134e95-8775-4a44-afd9-cefb05ed2b2d,Namespace:kube-system,Attempt:0,}"
Feb 9 09:44:47.839403 env[1140]: time="2024-02-09T09:44:47.839316292Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 09:44:47.839403 env[1140]: time="2024-02-09T09:44:47.839358852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 09:44:47.839403 env[1140]: time="2024-02-09T09:44:47.839369412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 09:44:47.839626 env[1140]: time="2024-02-09T09:44:47.839491892Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f3bacca38b7a657a50fa217ff002f229da0950f7fb96dd9a7019f617f3529fec pid=3246 runtime=io.containerd.runc.v2
Feb 9 09:44:47.858323 systemd[1]: Started cri-containerd-f3bacca38b7a657a50fa217ff002f229da0950f7fb96dd9a7019f617f3529fec.scope.
Feb 9 09:44:47.883813 kubelet[1406]: E0209 09:44:47.883770 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:44:47.899706 env[1140]: time="2024-02-09T09:44:47.899664982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lz47w,Uid:49134e95-8775-4a44-afd9-cefb05ed2b2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3bacca38b7a657a50fa217ff002f229da0950f7fb96dd9a7019f617f3529fec\""
Feb 9 09:44:47.900449 kubelet[1406]: E0209 09:44:47.900432 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:44:47.902251 env[1140]: time="2024-02-09T09:44:47.902178262Z" level=info msg="CreateContainer within sandbox \"f3bacca38b7a657a50fa217ff002f229da0950f7fb96dd9a7019f617f3529fec\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 09:44:47.921196 env[1140]: time="2024-02-09T09:44:47.921093105Z" level=info msg="CreateContainer within sandbox \"f3bacca38b7a657a50fa217ff002f229da0950f7fb96dd9a7019f617f3529fec\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e00cd558026cf3a371302c08bd3cce22c96a95cd395e46d96d621efea92f6927\""
Feb 9 09:44:47.922619 env[1140]: time="2024-02-09T09:44:47.922546065Z" level=info msg="StartContainer for \"e00cd558026cf3a371302c08bd3cce22c96a95cd395e46d96d621efea92f6927\""
Feb 9 09:44:47.923586 kubelet[1406]: I0209 09:44:47.923555 1406 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-kj4kl" podStartSLOduration=-9.223372033931263e+09 pod.CreationTimestamp="2024-02-09 09:44:45 +0000 UTC" firstStartedPulling="2024-02-09 09:44:45.712738162 +0000 UTC m=+68.970314998" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:44:47.923379066 +0000 UTC m=+71.180955902" watchObservedRunningTime="2024-02-09 09:44:47.923512506 +0000 UTC m=+71.181089342"
Feb 9 09:44:47.940162 systemd[1]: Started cri-containerd-e00cd558026cf3a371302c08bd3cce22c96a95cd395e46d96d621efea92f6927.scope.
Feb 9 09:44:48.026246 env[1140]: time="2024-02-09T09:44:48.026199722Z" level=info msg="StartContainer for \"e00cd558026cf3a371302c08bd3cce22c96a95cd395e46d96d621efea92f6927\" returns successfully"
Feb 9 09:44:48.031208 systemd[1]: cri-containerd-e00cd558026cf3a371302c08bd3cce22c96a95cd395e46d96d621efea92f6927.scope: Deactivated successfully.
Feb 9 09:44:48.052119 env[1140]: time="2024-02-09T09:44:48.052058766Z" level=info msg="shim disconnected" id=e00cd558026cf3a371302c08bd3cce22c96a95cd395e46d96d621efea92f6927
Feb 9 09:44:48.052119 env[1140]: time="2024-02-09T09:44:48.052104286Z" level=warning msg="cleaning up after shim disconnected" id=e00cd558026cf3a371302c08bd3cce22c96a95cd395e46d96d621efea92f6927 namespace=k8s.io
Feb 9 09:44:48.052119 env[1140]: time="2024-02-09T09:44:48.052114846Z" level=info msg="cleaning up dead shim"
Feb 9 09:44:48.058714 env[1140]: time="2024-02-09T09:44:48.058661407Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:44:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3329 runtime=io.containerd.runc.v2\n"
Feb 9 09:44:48.589933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2175267424.mount: Deactivated successfully.
Feb 9 09:44:48.658010 kubelet[1406]: E0209 09:44:48.657957 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:44:48.887365 kubelet[1406]: E0209 09:44:48.886527 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:44:48.887365 kubelet[1406]: E0209 09:44:48.887048 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:44:48.888460 kubelet[1406]: W0209 09:44:48.888434 1406 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod971a50b7_acff_477a_add4_d081f08c36f4.slice/cri-containerd-4981830bafdcaec74ae229d18102ca91f882508f23939899008b1169866cc162.scope WatchSource:0}: container "4981830bafdcaec74ae229d18102ca91f882508f23939899008b1169866cc162" in namespace "k8s.io": not found
Feb 9 09:44:48.889478 env[1140]: time="2024-02-09T09:44:48.889435536Z" level=info msg="CreateContainer within sandbox \"f3bacca38b7a657a50fa217ff002f229da0950f7fb96dd9a7019f617f3529fec\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 09:44:48.905445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2938308300.mount: Deactivated successfully.
Feb 9 09:44:48.907965 env[1140]: time="2024-02-09T09:44:48.907914539Z" level=info msg="CreateContainer within sandbox \"f3bacca38b7a657a50fa217ff002f229da0950f7fb96dd9a7019f617f3529fec\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bc8e8bec11c1648496ae94f4a8e6a09f99ade45e170f8cffe1b7898347cc98c0\""
Feb 9 09:44:48.908599 env[1140]: time="2024-02-09T09:44:48.908570179Z" level=info msg="StartContainer for \"bc8e8bec11c1648496ae94f4a8e6a09f99ade45e170f8cffe1b7898347cc98c0\""
Feb 9 09:44:48.923792 systemd[1]: Started cri-containerd-bc8e8bec11c1648496ae94f4a8e6a09f99ade45e170f8cffe1b7898347cc98c0.scope.
Feb 9 09:44:48.959479 env[1140]: time="2024-02-09T09:44:48.959434307Z" level=info msg="StartContainer for \"bc8e8bec11c1648496ae94f4a8e6a09f99ade45e170f8cffe1b7898347cc98c0\" returns successfully"
Feb 9 09:44:48.963553 systemd[1]: cri-containerd-bc8e8bec11c1648496ae94f4a8e6a09f99ade45e170f8cffe1b7898347cc98c0.scope: Deactivated successfully.
Feb 9 09:44:48.979899 env[1140]: time="2024-02-09T09:44:48.979857590Z" level=info msg="shim disconnected" id=bc8e8bec11c1648496ae94f4a8e6a09f99ade45e170f8cffe1b7898347cc98c0
Feb 9 09:44:48.979899 env[1140]: time="2024-02-09T09:44:48.979899950Z" level=warning msg="cleaning up after shim disconnected" id=bc8e8bec11c1648496ae94f4a8e6a09f99ade45e170f8cffe1b7898347cc98c0 namespace=k8s.io
Feb 9 09:44:48.980140 env[1140]: time="2024-02-09T09:44:48.979909390Z" level=info msg="cleaning up dead shim"
Feb 9 09:44:48.986638 env[1140]: time="2024-02-09T09:44:48.986586951Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:44:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3396 runtime=io.containerd.runc.v2\n"
Feb 9 09:44:49.590013 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc8e8bec11c1648496ae94f4a8e6a09f99ade45e170f8cffe1b7898347cc98c0-rootfs.mount: Deactivated successfully.
Feb 9 09:44:49.658137 kubelet[1406]: E0209 09:44:49.658085 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:44:49.889481 kubelet[1406]: E0209 09:44:49.889394 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:44:49.891950 env[1140]: time="2024-02-09T09:44:49.891913283Z" level=info msg="CreateContainer within sandbox \"f3bacca38b7a657a50fa217ff002f229da0950f7fb96dd9a7019f617f3529fec\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 09:44:49.907321 env[1140]: time="2024-02-09T09:44:49.907280365Z" level=info msg="CreateContainer within sandbox \"f3bacca38b7a657a50fa217ff002f229da0950f7fb96dd9a7019f617f3529fec\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"49923821470f161b232d6c491c93812d98cdb3ae1c7f12349e6df4e5c255b276\""
Feb 9 09:44:49.907825 env[1140]: time="2024-02-09T09:44:49.907803165Z" level=info msg="StartContainer for \"49923821470f161b232d6c491c93812d98cdb3ae1c7f12349e6df4e5c255b276\""
Feb 9 09:44:49.926248 systemd[1]: Started cri-containerd-49923821470f161b232d6c491c93812d98cdb3ae1c7f12349e6df4e5c255b276.scope.
Feb 9 09:44:49.964597 env[1140]: time="2024-02-09T09:44:49.964557854Z" level=info msg="StartContainer for \"49923821470f161b232d6c491c93812d98cdb3ae1c7f12349e6df4e5c255b276\" returns successfully"
Feb 9 09:44:49.966420 systemd[1]: cri-containerd-49923821470f161b232d6c491c93812d98cdb3ae1c7f12349e6df4e5c255b276.scope: Deactivated successfully.
Feb 9 09:44:49.987325 env[1140]: time="2024-02-09T09:44:49.987263777Z" level=info msg="shim disconnected" id=49923821470f161b232d6c491c93812d98cdb3ae1c7f12349e6df4e5c255b276
Feb 9 09:44:49.987541 env[1140]: time="2024-02-09T09:44:49.987522777Z" level=warning msg="cleaning up after shim disconnected" id=49923821470f161b232d6c491c93812d98cdb3ae1c7f12349e6df4e5c255b276 namespace=k8s.io
Feb 9 09:44:49.987607 env[1140]: time="2024-02-09T09:44:49.987594177Z" level=info msg="cleaning up dead shim"
Feb 9 09:44:49.994004 env[1140]: time="2024-02-09T09:44:49.993971018Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:44:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3453 runtime=io.containerd.runc.v2\n"
Feb 9 09:44:50.590075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49923821470f161b232d6c491c93812d98cdb3ae1c7f12349e6df4e5c255b276-rootfs.mount: Deactivated successfully.
Feb 9 09:44:50.659268 kubelet[1406]: E0209 09:44:50.659221 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:44:50.893288 kubelet[1406]: E0209 09:44:50.893181 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:44:50.895335 env[1140]: time="2024-02-09T09:44:50.895139461Z" level=info msg="CreateContainer within sandbox \"f3bacca38b7a657a50fa217ff002f229da0950f7fb96dd9a7019f617f3529fec\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 09:44:50.920880 env[1140]: time="2024-02-09T09:44:50.920733624Z" level=info msg="CreateContainer within sandbox \"f3bacca38b7a657a50fa217ff002f229da0950f7fb96dd9a7019f617f3529fec\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1a9e5dce82b50ad2cf43e9eabe65143632c3049b0b4516404de8a5fe9724bf89\""
Feb 9 09:44:50.921443 env[1140]: time="2024-02-09T09:44:50.921406584Z" level=info msg="StartContainer for \"1a9e5dce82b50ad2cf43e9eabe65143632c3049b0b4516404de8a5fe9724bf89\""
Feb 9 09:44:50.937692 systemd[1]: Started cri-containerd-1a9e5dce82b50ad2cf43e9eabe65143632c3049b0b4516404de8a5fe9724bf89.scope.
Feb 9 09:44:50.971614 systemd[1]: cri-containerd-1a9e5dce82b50ad2cf43e9eabe65143632c3049b0b4516404de8a5fe9724bf89.scope: Deactivated successfully.
Feb 9 09:44:50.973412 env[1140]: time="2024-02-09T09:44:50.973281992Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49134e95_8775_4a44_afd9_cefb05ed2b2d.slice/cri-containerd-1a9e5dce82b50ad2cf43e9eabe65143632c3049b0b4516404de8a5fe9724bf89.scope/memory.events\": no such file or directory"
Feb 9 09:44:50.974920 env[1140]: time="2024-02-09T09:44:50.974884152Z" level=info msg="StartContainer for \"1a9e5dce82b50ad2cf43e9eabe65143632c3049b0b4516404de8a5fe9724bf89\" returns successfully"
Feb 9 09:44:50.992794 env[1140]: time="2024-02-09T09:44:50.992748554Z" level=info msg="shim disconnected" id=1a9e5dce82b50ad2cf43e9eabe65143632c3049b0b4516404de8a5fe9724bf89
Feb 9 09:44:50.992794 env[1140]: time="2024-02-09T09:44:50.992795914Z" level=warning msg="cleaning up after shim disconnected" id=1a9e5dce82b50ad2cf43e9eabe65143632c3049b0b4516404de8a5fe9724bf89 namespace=k8s.io
Feb 9 09:44:50.992990 env[1140]: time="2024-02-09T09:44:50.992806394Z" level=info msg="cleaning up dead shim"
Feb 9 09:44:50.999410 env[1140]: time="2024-02-09T09:44:50.999373355Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:44:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3508 runtime=io.containerd.runc.v2\n"
Feb 9 09:44:51.659982 kubelet[1406]: E0209 09:44:51.659929 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:44:51.896960 kubelet[1406]: E0209 09:44:51.896933 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:44:51.899082 env[1140]: time="2024-02-09T09:44:51.899025510Z" level=info msg="CreateContainer within sandbox \"f3bacca38b7a657a50fa217ff002f229da0950f7fb96dd9a7019f617f3529fec\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 09:44:51.910721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2094900931.mount: Deactivated successfully.
Feb 9 09:44:51.912037 env[1140]: time="2024-02-09T09:44:51.911991352Z" level=info msg="CreateContainer within sandbox \"f3bacca38b7a657a50fa217ff002f229da0950f7fb96dd9a7019f617f3529fec\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"37d001f75ccff78c7dea974992361ed287173a16a1f723414856fb52ae3ddd6c\""
Feb 9 09:44:51.912514 env[1140]: time="2024-02-09T09:44:51.912486272Z" level=info msg="StartContainer for \"37d001f75ccff78c7dea974992361ed287173a16a1f723414856fb52ae3ddd6c\""
Feb 9 09:44:51.930130 systemd[1]: Started cri-containerd-37d001f75ccff78c7dea974992361ed287173a16a1f723414856fb52ae3ddd6c.scope.
Feb 9 09:44:51.968068 env[1140]: time="2024-02-09T09:44:51.968024079Z" level=info msg="StartContainer for \"37d001f75ccff78c7dea974992361ed287173a16a1f723414856fb52ae3ddd6c\" returns successfully"
Feb 9 09:44:52.002307 kubelet[1406]: W0209 09:44:52.000898 1406 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49134e95_8775_4a44_afd9_cefb05ed2b2d.slice/cri-containerd-e00cd558026cf3a371302c08bd3cce22c96a95cd395e46d96d621efea92f6927.scope WatchSource:0}: task e00cd558026cf3a371302c08bd3cce22c96a95cd395e46d96d621efea92f6927 not found: not found
Feb 9 09:44:52.002307 kubelet[1406]: I0209 09:44:52.000959 1406 setters.go:548] "Node became not ready" node="10.0.0.14" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 09:44:52.000919883 +0000 UTC m=+75.258496719 LastTransitionTime:2024-02-09 09:44:52.000919883 +0000 UTC m=+75.258496719 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 9 09:44:52.199269 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Feb 9 09:44:52.590265 systemd[1]: run-containerd-runc-k8s.io-37d001f75ccff78c7dea974992361ed287173a16a1f723414856fb52ae3ddd6c-runc.5zOSIW.mount: Deactivated successfully.
Feb 9 09:44:52.660684 kubelet[1406]: E0209 09:44:52.660606 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:44:52.901259 kubelet[1406]: E0209 09:44:52.901148 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:44:53.661009 kubelet[1406]: E0209 09:44:53.660921 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:44:53.817945 systemd[1]: run-containerd-runc-k8s.io-37d001f75ccff78c7dea974992361ed287173a16a1f723414856fb52ae3ddd6c-runc.jDcJcf.mount: Deactivated successfully.
Feb 9 09:44:53.902458 kubelet[1406]: E0209 09:44:53.902378 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:44:54.662085 kubelet[1406]: E0209 09:44:54.662042 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:44:54.803558 systemd-networkd[1055]: lxc_health: Link UP
Feb 9 09:44:54.814361 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 09:44:54.813848 systemd-networkd[1055]: lxc_health: Gained carrier
Feb 9 09:44:54.903588 kubelet[1406]: E0209 09:44:54.903562 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:44:55.109734 kubelet[1406]: W0209 09:44:55.109689 1406 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49134e95_8775_4a44_afd9_cefb05ed2b2d.slice/cri-containerd-bc8e8bec11c1648496ae94f4a8e6a09f99ade45e170f8cffe1b7898347cc98c0.scope WatchSource:0}: task bc8e8bec11c1648496ae94f4a8e6a09f99ade45e170f8cffe1b7898347cc98c0 not found: not found
Feb 9 09:44:55.662879 kubelet[1406]: E0209 09:44:55.662833 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:44:55.838128 kubelet[1406]: I0209 09:44:55.837870 1406 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-lz47w" podStartSLOduration=9.837836183 pod.CreationTimestamp="2024-02-09 09:44:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:44:52.913985953 +0000 UTC m=+76.171562789" watchObservedRunningTime="2024-02-09 09:44:55.837836183 +0000 UTC m=+79.095413019"
Feb 9 09:44:55.905325 kubelet[1406]: E0209 09:44:55.905144 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:44:56.635398 systemd-networkd[1055]: lxc_health: Gained IPv6LL
Feb 9 09:44:56.663257 kubelet[1406]: E0209 09:44:56.663209 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:44:56.907890 kubelet[1406]: E0209 09:44:56.907765 1406 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 09:44:57.606083 kubelet[1406]: E0209 09:44:57.606038 1406 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:44:57.663395 kubelet[1406]: E0209 09:44:57.663338 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:44:58.221083 kubelet[1406]: W0209 09:44:58.221034 1406 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49134e95_8775_4a44_afd9_cefb05ed2b2d.slice/cri-containerd-49923821470f161b232d6c491c93812d98cdb3ae1c7f12349e6df4e5c255b276.scope WatchSource:0}: task 49923821470f161b232d6c491c93812d98cdb3ae1c7f12349e6df4e5c255b276 not found: not found
Feb 9 09:44:58.664290 kubelet[1406]: E0209 09:44:58.664149 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:44:59.665300 kubelet[1406]: E0209 09:44:59.665174 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:45:00.665916 kubelet[1406]: E0209 09:45:00.665859 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:45:01.327828 kubelet[1406]: W0209 09:45:01.327781 1406 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49134e95_8775_4a44_afd9_cefb05ed2b2d.slice/cri-containerd-1a9e5dce82b50ad2cf43e9eabe65143632c3049b0b4516404de8a5fe9724bf89.scope WatchSource:0}: task 1a9e5dce82b50ad2cf43e9eabe65143632c3049b0b4516404de8a5fe9724bf89 not found: not found
Feb 9 09:45:01.666621 kubelet[1406]: E0209 09:45:01.666457 1406 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"