Feb  9 18:42:34.716117 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb  9 18:42:34.716137 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 17:24:35 -00 2024
Feb  9 18:42:34.716144 kernel: efi: EFI v2.70 by EDK II
Feb  9 18:42:34.716150 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 
Feb  9 18:42:34.716155 kernel: random: crng init done
Feb  9 18:42:34.716160 kernel: ACPI: Early table checksum verification disabled
Feb  9 18:42:34.716167 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Feb  9 18:42:34.716173 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS  BXPC     00000001      01000013)
Feb  9 18:42:34.716179 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  9 18:42:34.716184 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  9 18:42:34.716190 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  9 18:42:34.716195 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  9 18:42:34.716200 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  9 18:42:34.716206 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  9 18:42:34.716213 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  9 18:42:34.716219 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  9 18:42:34.716225 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  9 18:42:34.716231 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb  9 18:42:34.716236 kernel: NUMA: Failed to initialise from firmware
Feb  9 18:42:34.716242 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb  9 18:42:34.716248 kernel: NUMA: NODE_DATA [mem 0xdcb09900-0xdcb0efff]
Feb  9 18:42:34.716254 kernel: Zone ranges:
Feb  9 18:42:34.716259 kernel:   DMA      [mem 0x0000000040000000-0x00000000dcffffff]
Feb  9 18:42:34.716266 kernel:   DMA32    empty
Feb  9 18:42:34.716272 kernel:   Normal   empty
Feb  9 18:42:34.716278 kernel: Movable zone start for each node
Feb  9 18:42:34.716283 kernel: Early memory node ranges
Feb  9 18:42:34.716289 kernel:   node   0: [mem 0x0000000040000000-0x00000000d924ffff]
Feb  9 18:42:34.716295 kernel:   node   0: [mem 0x00000000d9250000-0x00000000d951ffff]
Feb  9 18:42:34.716300 kernel:   node   0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Feb  9 18:42:34.716306 kernel:   node   0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Feb  9 18:42:34.716311 kernel:   node   0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Feb  9 18:42:34.716317 kernel:   node   0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Feb  9 18:42:34.716323 kernel:   node   0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Feb  9 18:42:34.716328 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb  9 18:42:34.716335 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb  9 18:42:34.716341 kernel: psci: probing for conduit method from ACPI.
Feb  9 18:42:34.716347 kernel: psci: PSCIv1.1 detected in firmware.
Feb  9 18:42:34.716352 kernel: psci: Using standard PSCI v0.2 function IDs
Feb  9 18:42:34.716358 kernel: psci: Trusted OS migration not required
Feb  9 18:42:34.716366 kernel: psci: SMC Calling Convention v1.1
Feb  9 18:42:34.716372 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb  9 18:42:34.716380 kernel: ACPI: SRAT not present
Feb  9 18:42:34.716386 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb  9 18:42:34.716392 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb  9 18:42:34.716398 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 
Feb  9 18:42:34.716404 kernel: Detected PIPT I-cache on CPU0
Feb  9 18:42:34.716410 kernel: CPU features: detected: GIC system register CPU interface
Feb  9 18:42:34.716416 kernel: CPU features: detected: Hardware dirty bit management
Feb  9 18:42:34.716422 kernel: CPU features: detected: Spectre-v4
Feb  9 18:42:34.716428 kernel: CPU features: detected: Spectre-BHB
Feb  9 18:42:34.716435 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb  9 18:42:34.716441 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb  9 18:42:34.716447 kernel: CPU features: detected: ARM erratum 1418040
Feb  9 18:42:34.716453 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 633024
Feb  9 18:42:34.716459 kernel: Policy zone: DMA
Feb  9 18:42:34.716466 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4
Feb  9 18:42:34.716473 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb  9 18:42:34.716479 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb  9 18:42:34.716485 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb  9 18:42:34.716491 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb  9 18:42:34.716497 kernel: Memory: 2459144K/2572288K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 113144K reserved, 0K cma-reserved)
Feb  9 18:42:34.716505 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb  9 18:42:34.716511 kernel: trace event string verifier disabled
Feb  9 18:42:34.716517 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb  9 18:42:34.716523 kernel: rcu:         RCU event tracing is enabled.
Feb  9 18:42:34.716529 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb  9 18:42:34.716536 kernel:         Trampoline variant of Tasks RCU enabled.
Feb  9 18:42:34.716542 kernel:         Tracing variant of Tasks RCU enabled.
Feb  9 18:42:34.716548 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb  9 18:42:34.716554 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb  9 18:42:34.716560 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb  9 18:42:34.716566 kernel: GICv3: 256 SPIs implemented
Feb  9 18:42:34.716573 kernel: GICv3: 0 Extended SPIs implemented
Feb  9 18:42:34.716580 kernel: GICv3: Distributor has no Range Selector support
Feb  9 18:42:34.716586 kernel: Root IRQ handler: gic_handle_irq
Feb  9 18:42:34.716591 kernel: GICv3: 16 PPIs implemented
Feb  9 18:42:34.716597 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb  9 18:42:34.716603 kernel: ACPI: SRAT not present
Feb  9 18:42:34.716609 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb  9 18:42:34.716616 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Feb  9 18:42:34.716622 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Feb  9 18:42:34.716628 kernel: GICv3: using LPI property table @0x00000000400d0000
Feb  9 18:42:34.716634 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Feb  9 18:42:34.716640 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb  9 18:42:34.716647 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb  9 18:42:34.716653 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb  9 18:42:34.716660 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb  9 18:42:34.716666 kernel: arm-pv: using stolen time PV
Feb  9 18:42:34.716672 kernel: Console: colour dummy device 80x25
Feb  9 18:42:34.716678 kernel: ACPI: Core revision 20210730
Feb  9 18:42:34.716684 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb  9 18:42:34.716691 kernel: pid_max: default: 32768 minimum: 301
Feb  9 18:42:34.716697 kernel: LSM: Security Framework initializing
Feb  9 18:42:34.716703 kernel: SELinux:  Initializing.
Feb  9 18:42:34.716710 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb  9 18:42:34.716717 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb  9 18:42:34.716723 kernel: rcu: Hierarchical SRCU implementation.
Feb  9 18:42:34.716730 kernel: Platform MSI: ITS@0x8080000 domain created
Feb  9 18:42:34.716736 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb  9 18:42:34.716742 kernel: Remapping and enabling EFI services.
Feb  9 18:42:34.716748 kernel: smp: Bringing up secondary CPUs ...
Feb  9 18:42:34.716754 kernel: Detected PIPT I-cache on CPU1
Feb  9 18:42:34.716761 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb  9 18:42:34.716768 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Feb  9 18:42:34.716775 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb  9 18:42:34.716804 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb  9 18:42:34.716811 kernel: Detected PIPT I-cache on CPU2
Feb  9 18:42:34.716817 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb  9 18:42:34.716824 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Feb  9 18:42:34.716830 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb  9 18:42:34.716836 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb  9 18:42:34.716842 kernel: Detected PIPT I-cache on CPU3
Feb  9 18:42:34.716848 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb  9 18:42:34.716857 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Feb  9 18:42:34.716863 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb  9 18:42:34.716869 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb  9 18:42:34.716875 kernel: smp: Brought up 1 node, 4 CPUs
Feb  9 18:42:34.716886 kernel: SMP: Total of 4 processors activated.
Feb  9 18:42:34.716893 kernel: CPU features: detected: 32-bit EL0 Support
Feb  9 18:42:34.716900 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb  9 18:42:34.716907 kernel: CPU features: detected: Common not Private translations
Feb  9 18:42:34.716913 kernel: CPU features: detected: CRC32 instructions
Feb  9 18:42:34.716920 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb  9 18:42:34.716926 kernel: CPU features: detected: LSE atomic instructions
Feb  9 18:42:34.716933 kernel: CPU features: detected: Privileged Access Never
Feb  9 18:42:34.716941 kernel: CPU features: detected: RAS Extension Support
Feb  9 18:42:34.716947 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb  9 18:42:34.716954 kernel: CPU: All CPU(s) started at EL1
Feb  9 18:42:34.716960 kernel: alternatives: patching kernel code
Feb  9 18:42:34.716968 kernel: devtmpfs: initialized
Feb  9 18:42:34.716975 kernel: KASLR enabled
Feb  9 18:42:34.716981 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb  9 18:42:34.716988 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb  9 18:42:34.716995 kernel: pinctrl core: initialized pinctrl subsystem
Feb  9 18:42:34.717001 kernel: SMBIOS 3.0.0 present.
Feb  9 18:42:34.717007 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Feb  9 18:42:34.717014 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb  9 18:42:34.717021 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb  9 18:42:34.717027 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb  9 18:42:34.717035 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb  9 18:42:34.717042 kernel: audit: initializing netlink subsys (disabled)
Feb  9 18:42:34.717049 kernel: audit: type=2000 audit(0.045:1): state=initialized audit_enabled=0 res=1
Feb  9 18:42:34.717055 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb  9 18:42:34.717062 kernel: cpuidle: using governor menu
Feb  9 18:42:34.717068 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb  9 18:42:34.717075 kernel: ASID allocator initialised with 32768 entries
Feb  9 18:42:34.717081 kernel: ACPI: bus type PCI registered
Feb  9 18:42:34.717088 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb  9 18:42:34.717095 kernel: Serial: AMBA PL011 UART driver
Feb  9 18:42:34.717102 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb  9 18:42:34.717108 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb  9 18:42:34.717115 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb  9 18:42:34.717122 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb  9 18:42:34.717128 kernel: cryptd: max_cpu_qlen set to 1000
Feb  9 18:42:34.717135 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb  9 18:42:34.717141 kernel: ACPI: Added _OSI(Module Device)
Feb  9 18:42:34.717148 kernel: ACPI: Added _OSI(Processor Device)
Feb  9 18:42:34.717155 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb  9 18:42:34.717162 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb  9 18:42:34.717169 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb  9 18:42:34.717175 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb  9 18:42:34.717182 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb  9 18:42:34.717189 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb  9 18:42:34.717195 kernel: ACPI: Interpreter enabled
Feb  9 18:42:34.717202 kernel: ACPI: Using GIC for interrupt routing
Feb  9 18:42:34.717208 kernel: ACPI: MCFG table detected, 1 entries
Feb  9 18:42:34.717216 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb  9 18:42:34.717222 kernel: printk: console [ttyAMA0] enabled
Feb  9 18:42:34.717229 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb  9 18:42:34.717348 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb  9 18:42:34.717414 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb  9 18:42:34.717474 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb  9 18:42:34.717534 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb  9 18:42:34.717595 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb  9 18:42:34.717604 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io  0x0000-0xffff window]
Feb  9 18:42:34.717611 kernel: PCI host bridge to bus 0000:00
Feb  9 18:42:34.717684 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb  9 18:42:34.717742 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0xffff window]
Feb  9 18:42:34.717814 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb  9 18:42:34.717868 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb  9 18:42:34.717944 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb  9 18:42:34.718267 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb  9 18:42:34.718339 kernel: pci 0000:00:01.0: reg 0x10: [io  0x0000-0x001f]
Feb  9 18:42:34.718400 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb  9 18:42:34.718460 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb  9 18:42:34.718520 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb  9 18:42:34.718581 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb  9 18:42:34.718645 kernel: pci 0000:00:01.0: BAR 0: assigned [io  0x1000-0x101f]
Feb  9 18:42:34.718699 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb  9 18:42:34.718752 kernel: pci_bus 0000:00: resource 5 [io  0x0000-0xffff window]
Feb  9 18:42:34.718834 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb  9 18:42:34.718844 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb  9 18:42:34.718851 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb  9 18:42:34.718858 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb  9 18:42:34.718866 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb  9 18:42:34.718873 kernel: iommu: Default domain type: Translated 
Feb  9 18:42:34.718880 kernel: iommu: DMA domain TLB invalidation policy: strict mode 
Feb  9 18:42:34.718886 kernel: vgaarb: loaded
Feb  9 18:42:34.718893 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb  9 18:42:34.718899 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb  9 18:42:34.718906 kernel: PTP clock support registered
Feb  9 18:42:34.718913 kernel: Registered efivars operations
Feb  9 18:42:34.718919 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb  9 18:42:34.718926 kernel: VFS: Disk quotas dquot_6.6.0
Feb  9 18:42:34.718933 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb  9 18:42:34.718940 kernel: pnp: PnP ACPI init
Feb  9 18:42:34.719007 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb  9 18:42:34.719017 kernel: pnp: PnP ACPI: found 1 devices
Feb  9 18:42:34.719023 kernel: NET: Registered PF_INET protocol family
Feb  9 18:42:34.719030 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb  9 18:42:34.719037 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb  9 18:42:34.719043 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb  9 18:42:34.719051 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb  9 18:42:34.719058 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb  9 18:42:34.719065 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb  9 18:42:34.719071 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb  9 18:42:34.719078 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb  9 18:42:34.719084 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb  9 18:42:34.719091 kernel: PCI: CLS 0 bytes, default 64
Feb  9 18:42:34.719098 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb  9 18:42:34.719105 kernel: kvm [1]: HYP mode not available
Feb  9 18:42:34.719112 kernel: Initialise system trusted keyrings
Feb  9 18:42:34.719119 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb  9 18:42:34.719125 kernel: Key type asymmetric registered
Feb  9 18:42:34.719132 kernel: Asymmetric key parser 'x509' registered
Feb  9 18:42:34.719138 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb  9 18:42:34.719145 kernel: io scheduler mq-deadline registered
Feb  9 18:42:34.719151 kernel: io scheduler kyber registered
Feb  9 18:42:34.719157 kernel: io scheduler bfq registered
Feb  9 18:42:34.719164 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb  9 18:42:34.719172 kernel: ACPI: button: Power Button [PWRB]
Feb  9 18:42:34.719179 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb  9 18:42:34.719241 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb  9 18:42:34.719249 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb  9 18:42:34.719256 kernel: thunder_xcv, ver 1.0
Feb  9 18:42:34.719262 kernel: thunder_bgx, ver 1.0
Feb  9 18:42:34.719269 kernel: nicpf, ver 1.0
Feb  9 18:42:34.719275 kernel: nicvf, ver 1.0
Feb  9 18:42:34.719348 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb  9 18:42:34.719407 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T18:42:34 UTC (1707504154)
Feb  9 18:42:34.719416 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb  9 18:42:34.719423 kernel: NET: Registered PF_INET6 protocol family
Feb  9 18:42:34.719429 kernel: Segment Routing with IPv6
Feb  9 18:42:34.719436 kernel: In-situ OAM (IOAM) with IPv6
Feb  9 18:42:34.719442 kernel: NET: Registered PF_PACKET protocol family
Feb  9 18:42:34.719449 kernel: Key type dns_resolver registered
Feb  9 18:42:34.719455 kernel: registered taskstats version 1
Feb  9 18:42:34.719463 kernel: Loading compiled-in X.509 certificates
Feb  9 18:42:34.719470 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 947a80114e81e2815f6db72a0d388260762488f9'
Feb  9 18:42:34.719476 kernel: Key type .fscrypt registered
Feb  9 18:42:34.719483 kernel: Key type fscrypt-provisioning registered
Feb  9 18:42:34.719489 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb  9 18:42:34.719496 kernel: ima: Allocated hash algorithm: sha1
Feb  9 18:42:34.719502 kernel: ima: No architecture policies found
Feb  9 18:42:34.719509 kernel: Freeing unused kernel memory: 34688K
Feb  9 18:42:34.719515 kernel: Run /init as init process
Feb  9 18:42:34.719523 kernel:   with arguments:
Feb  9 18:42:34.719530 kernel:     /init
Feb  9 18:42:34.719536 kernel:   with environment:
Feb  9 18:42:34.719542 kernel:     HOME=/
Feb  9 18:42:34.719549 kernel:     TERM=linux
Feb  9 18:42:34.719555 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Feb  9 18:42:34.719563 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb  9 18:42:34.719572 systemd[1]: Detected virtualization kvm.
Feb  9 18:42:34.719580 systemd[1]: Detected architecture arm64.
Feb  9 18:42:34.719587 systemd[1]: Running in initrd.
Feb  9 18:42:34.719594 systemd[1]: No hostname configured, using default hostname.
Feb  9 18:42:34.719601 systemd[1]: Hostname set to <localhost>.
Feb  9 18:42:34.719608 systemd[1]: Initializing machine ID from VM UUID.
Feb  9 18:42:34.719615 systemd[1]: Queued start job for default target initrd.target.
Feb  9 18:42:34.719622 systemd[1]: Started systemd-ask-password-console.path.
Feb  9 18:42:34.719629 systemd[1]: Reached target cryptsetup.target.
Feb  9 18:42:34.719690 systemd[1]: Reached target paths.target.
Feb  9 18:42:34.719697 systemd[1]: Reached target slices.target.
Feb  9 18:42:34.719704 systemd[1]: Reached target swap.target.
Feb  9 18:42:34.719711 systemd[1]: Reached target timers.target.
Feb  9 18:42:34.719718 systemd[1]: Listening on iscsid.socket.
Feb  9 18:42:34.719725 systemd[1]: Listening on iscsiuio.socket.
Feb  9 18:42:34.719732 systemd[1]: Listening on systemd-journald-audit.socket.
Feb  9 18:42:34.719741 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb  9 18:42:34.719748 systemd[1]: Listening on systemd-journald.socket.
Feb  9 18:42:34.719755 systemd[1]: Listening on systemd-networkd.socket.
Feb  9 18:42:34.719761 systemd[1]: Listening on systemd-udevd-control.socket.
Feb  9 18:42:34.719768 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb  9 18:42:34.719775 systemd[1]: Reached target sockets.target.
Feb  9 18:42:34.719815 systemd[1]: Starting kmod-static-nodes.service...
Feb  9 18:42:34.719824 systemd[1]: Finished network-cleanup.service.
Feb  9 18:42:34.719831 systemd[1]: Starting systemd-fsck-usr.service...
Feb  9 18:42:34.719841 systemd[1]: Starting systemd-journald.service...
Feb  9 18:42:34.719848 systemd[1]: Starting systemd-modules-load.service...
Feb  9 18:42:34.719855 systemd[1]: Starting systemd-resolved.service...
Feb  9 18:42:34.719862 systemd[1]: Starting systemd-vconsole-setup.service...
Feb  9 18:42:34.719869 systemd[1]: Finished kmod-static-nodes.service.
Feb  9 18:42:34.719876 systemd[1]: Finished systemd-fsck-usr.service.
Feb  9 18:42:34.719883 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb  9 18:42:34.719890 systemd[1]: Finished systemd-vconsole-setup.service.
Feb  9 18:42:34.719897 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb  9 18:42:34.719906 kernel: audit: type=1130 audit(1707504154.717:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:34.719913 systemd[1]: Starting dracut-cmdline-ask.service...
Feb  9 18:42:34.719923 systemd-journald[289]: Journal started
Feb  9 18:42:34.719967 systemd-journald[289]: Runtime Journal (/run/log/journal/48798b8f6a324348b55084c8cc7d3be9) is 6.0M, max 48.7M, 42.6M free.
Feb  9 18:42:34.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:34.712064 systemd-modules-load[290]: Inserted module 'overlay'
Feb  9 18:42:34.721573 systemd[1]: Started systemd-journald.service.
Feb  9 18:42:34.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:34.727657 kernel: audit: type=1130 audit(1707504154.722:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:34.727690 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb  9 18:42:34.728342 systemd-modules-load[290]: Inserted module 'br_netfilter'
Feb  9 18:42:34.729075 kernel: Bridge firewalling registered
Feb  9 18:42:34.736099 systemd[1]: Finished dracut-cmdline-ask.service.
Feb  9 18:42:34.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:34.738233 systemd[1]: Starting dracut-cmdline.service...
Feb  9 18:42:34.741542 kernel: audit: type=1130 audit(1707504154.736:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:34.741565 kernel: SCSI subsystem initialized
Feb  9 18:42:34.740639 systemd-resolved[291]: Positive Trust Anchors:
Feb  9 18:42:34.740646 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb  9 18:42:34.740673 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb  9 18:42:34.744714 systemd-resolved[291]: Defaulting to hostname 'linux'.
Feb  9 18:42:34.752358 kernel: audit: type=1130 audit(1707504154.748:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:34.752376 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb  9 18:42:34.752391 kernel: device-mapper: uevent: version 1.0.3
Feb  9 18:42:34.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:34.752428 dracut-cmdline[308]: dracut-dracut-053
Feb  9 18:42:34.754672 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb  9 18:42:34.745451 systemd[1]: Started systemd-resolved.service.
Feb  9 18:42:34.755411 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4
Feb  9 18:42:34.749050 systemd[1]: Reached target nss-lookup.target.
Feb  9 18:42:34.760355 systemd-modules-load[290]: Inserted module 'dm_multipath'
Feb  9 18:42:34.761035 systemd[1]: Finished systemd-modules-load.service.
Feb  9 18:42:34.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:34.762288 systemd[1]: Starting systemd-sysctl.service...
Feb  9 18:42:34.765104 kernel: audit: type=1130 audit(1707504154.761:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:34.771019 systemd[1]: Finished systemd-sysctl.service.
Feb  9 18:42:34.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:34.774823 kernel: audit: type=1130 audit(1707504154.770:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:34.818815 kernel: Loading iSCSI transport class v2.0-870.
Feb  9 18:42:34.826810 kernel: iscsi: registered transport (tcp)
Feb  9 18:42:34.839965 kernel: iscsi: registered transport (qla4xxx)
Feb  9 18:42:34.840005 kernel: QLogic iSCSI HBA Driver
Feb  9 18:42:34.873511 systemd[1]: Finished dracut-cmdline.service.
Feb  9 18:42:34.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:34.874980 systemd[1]: Starting dracut-pre-udev.service...
Feb  9 18:42:34.877259 kernel: audit: type=1130 audit(1707504154.873:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:34.918809 kernel: raid6: neonx8   gen() 13806 MB/s
Feb  9 18:42:34.935801 kernel: raid6: neonx8   xor() 10821 MB/s
Feb  9 18:42:34.952813 kernel: raid6: neonx4   gen() 13567 MB/s
Feb  9 18:42:34.969802 kernel: raid6: neonx4   xor() 11219 MB/s
Feb  9 18:42:34.986800 kernel: raid6: neonx2   gen() 12981 MB/s
Feb  9 18:42:35.003801 kernel: raid6: neonx2   xor() 10254 MB/s
Feb  9 18:42:35.020806 kernel: raid6: neonx1   gen() 10504 MB/s
Feb  9 18:42:35.037805 kernel: raid6: neonx1   xor()  8791 MB/s
Feb  9 18:42:35.054811 kernel: raid6: int64x8  gen()  6292 MB/s
Feb  9 18:42:35.071804 kernel: raid6: int64x8  xor()  3543 MB/s
Feb  9 18:42:35.088809 kernel: raid6: int64x4  gen()  7226 MB/s
Feb  9 18:42:35.105806 kernel: raid6: int64x4  xor()  3854 MB/s
Feb  9 18:42:35.122801 kernel: raid6: int64x2  gen()  6152 MB/s
Feb  9 18:42:35.139800 kernel: raid6: int64x2  xor()  3322 MB/s
Feb  9 18:42:35.156806 kernel: raid6: int64x1  gen()  5046 MB/s
Feb  9 18:42:35.174000 kernel: raid6: int64x1  xor()  2645 MB/s
Feb  9 18:42:35.174021 kernel: raid6: using algorithm neonx8 gen() 13806 MB/s
Feb  9 18:42:35.174038 kernel: raid6: .... xor() 10821 MB/s, rmw enabled
Feb  9 18:42:35.174054 kernel: raid6: using neon recovery algorithm
Feb  9 18:42:35.184940 kernel: xor: measuring software checksum speed
Feb  9 18:42:35.184964 kernel:    8regs           : 17297 MB/sec
Feb  9 18:42:35.185800 kernel:    32regs          : 20760 MB/sec
Feb  9 18:42:35.186925 kernel:    arm64_neon      : 27911 MB/sec
Feb  9 18:42:35.186948 kernel: xor: using function: arm64_neon (27911 MB/sec)
Feb  9 18:42:35.241811 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb  9 18:42:35.251963 systemd[1]: Finished dracut-pre-udev.service.
Feb  9 18:42:35.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:35.254000 audit: BPF prog-id=7 op=LOAD
Feb  9 18:42:35.254000 audit: BPF prog-id=8 op=LOAD
Feb  9 18:42:35.255800 kernel: audit: type=1130 audit(1707504155.251:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:35.255819 kernel: audit: type=1334 audit(1707504155.254:10): prog-id=7 op=LOAD
Feb  9 18:42:35.255933 systemd[1]: Starting systemd-udevd.service...
Feb  9 18:42:35.269199 systemd-udevd[490]: Using default interface naming scheme 'v252'.
Feb  9 18:42:35.272588 systemd[1]: Started systemd-udevd.service.
Feb  9 18:42:35.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:35.276355 systemd[1]: Starting dracut-pre-trigger.service...
Feb  9 18:42:35.287424 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation
Feb  9 18:42:35.312315 systemd[1]: Finished dracut-pre-trigger.service.
Feb  9 18:42:35.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:35.313691 systemd[1]: Starting systemd-udev-trigger.service...
Feb  9 18:42:35.346083 systemd[1]: Finished systemd-udev-trigger.service.
Feb  9 18:42:35.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:35.382852 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb  9 18:42:35.384957 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb  9 18:42:35.384991 kernel: GPT:9289727 != 19775487
Feb  9 18:42:35.385001 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb  9 18:42:35.386161 kernel: GPT:9289727 != 19775487
Feb  9 18:42:35.386186 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb  9 18:42:35.386195 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb  9 18:42:35.396819 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (554)
Feb  9 18:42:35.401322 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb  9 18:42:35.404288 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb  9 18:42:35.405139 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb  9 18:42:35.410681 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb  9 18:42:35.413810 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb  9 18:42:35.415447 systemd[1]: Starting disk-uuid.service...
Feb  9 18:42:35.421167 disk-uuid[562]: Primary Header is updated.
Feb  9 18:42:35.421167 disk-uuid[562]: Secondary Entries is updated.
Feb  9 18:42:35.421167 disk-uuid[562]: Secondary Header is updated.
Feb  9 18:42:35.424809 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb  9 18:42:35.435812 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb  9 18:42:36.434837 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb  9 18:42:36.434887 disk-uuid[563]: The operation has completed successfully.
Feb  9 18:42:36.455507 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb  9 18:42:36.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:36.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:36.455597 systemd[1]: Finished disk-uuid.service.
Feb  9 18:42:36.459555 systemd[1]: Starting verity-setup.service...
Feb  9 18:42:36.472814 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb  9 18:42:36.492601 systemd[1]: Found device dev-mapper-usr.device.
Feb  9 18:42:36.494546 systemd[1]: Mounting sysusr-usr.mount...
Feb  9 18:42:36.496526 systemd[1]: Finished verity-setup.service.
Feb  9 18:42:36.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:36.544806 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb  9 18:42:36.545313 systemd[1]: Mounted sysusr-usr.mount.
Feb  9 18:42:36.546113 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb  9 18:42:36.546713 systemd[1]: Starting ignition-setup.service...
Feb  9 18:42:36.548854 systemd[1]: Starting parse-ip-for-networkd.service...
Feb  9 18:42:36.554241 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb  9 18:42:36.554271 kernel: BTRFS info (device vda6): using free space tree
Feb  9 18:42:36.554281 kernel: BTRFS info (device vda6): has skinny extents
Feb  9 18:42:36.561980 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb  9 18:42:36.568236 systemd[1]: Finished ignition-setup.service.
Feb  9 18:42:36.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:36.569689 systemd[1]: Starting ignition-fetch-offline.service...
Feb  9 18:42:36.629821 systemd[1]: Finished parse-ip-for-networkd.service.
Feb  9 18:42:36.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:36.630000 audit: BPF prog-id=9 op=LOAD
Feb  9 18:42:36.631924 systemd[1]: Starting systemd-networkd.service...
Feb  9 18:42:36.645090 ignition[646]: Ignition 2.14.0
Feb  9 18:42:36.645100 ignition[646]: Stage: fetch-offline
Feb  9 18:42:36.645137 ignition[646]: no configs at "/usr/lib/ignition/base.d"
Feb  9 18:42:36.645147 ignition[646]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb  9 18:42:36.645274 ignition[646]: parsed url from cmdline: ""
Feb  9 18:42:36.645277 ignition[646]: no config URL provided
Feb  9 18:42:36.645281 ignition[646]: reading system config file "/usr/lib/ignition/user.ign"
Feb  9 18:42:36.645288 ignition[646]: no config at "/usr/lib/ignition/user.ign"
Feb  9 18:42:36.645306 ignition[646]: op(1): [started]  loading QEMU firmware config module
Feb  9 18:42:36.645311 ignition[646]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb  9 18:42:36.649373 ignition[646]: op(1): [finished] loading QEMU firmware config module
Feb  9 18:42:36.653483 systemd-networkd[740]: lo: Link UP
Feb  9 18:42:36.653497 systemd-networkd[740]: lo: Gained carrier
Feb  9 18:42:36.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:36.654071 systemd-networkd[740]: Enumeration completed
Feb  9 18:42:36.654163 systemd[1]: Started systemd-networkd.service.
Feb  9 18:42:36.654415 systemd-networkd[740]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb  9 18:42:36.654974 systemd[1]: Reached target network.target.
Feb  9 18:42:36.655865 systemd-networkd[740]: eth0: Link UP
Feb  9 18:42:36.655869 systemd-networkd[740]: eth0: Gained carrier
Feb  9 18:42:36.656848 systemd[1]: Starting iscsiuio.service...
Feb  9 18:42:36.665366 systemd[1]: Started iscsiuio.service.
Feb  9 18:42:36.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:36.666714 systemd[1]: Starting iscsid.service...
Feb  9 18:42:36.670304 iscsid[747]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb  9 18:42:36.670304 iscsid[747]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb  9 18:42:36.670304 iscsid[747]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb  9 18:42:36.670304 iscsid[747]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb  9 18:42:36.670304 iscsid[747]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb  9 18:42:36.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:36.679744 iscsid[747]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb  9 18:42:36.673158 systemd[1]: Started iscsid.service.
Feb  9 18:42:36.677111 systemd[1]: Starting dracut-initqueue.service...
Feb  9 18:42:36.680859 systemd-networkd[740]: eth0: DHCPv4 address 10.0.0.123/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb  9 18:42:36.687070 systemd[1]: Finished dracut-initqueue.service.
Feb  9 18:42:36.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:36.688022 systemd[1]: Reached target remote-fs-pre.target.
Feb  9 18:42:36.689199 systemd[1]: Reached target remote-cryptsetup.target.
Feb  9 18:42:36.689253 ignition[646]: parsing config with SHA512: fec03b46d2421dec8675ebd7304504a03c3302921d549745b4fdad92085c37e9559a22934c05ebeb00c63d99c6a5babd1552459963fd31c3ae5cf8ffcc93c65f
Feb  9 18:42:36.690395 systemd[1]: Reached target remote-fs.target.
Feb  9 18:42:36.692251 systemd[1]: Starting dracut-pre-mount.service...
Feb  9 18:42:36.699559 systemd[1]: Finished dracut-pre-mount.service.
Feb  9 18:42:36.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:36.715915 unknown[646]: fetched base config from "system"
Feb  9 18:42:36.715926 unknown[646]: fetched user config from "qemu"
Feb  9 18:42:36.716422 ignition[646]: fetch-offline: fetch-offline passed
Feb  9 18:42:36.716486 ignition[646]: Ignition finished successfully
Feb  9 18:42:36.717668 systemd[1]: Finished ignition-fetch-offline.service.
Feb  9 18:42:36.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:36.718818 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb  9 18:42:36.719432 systemd[1]: Starting ignition-kargs.service...
Feb  9 18:42:36.728203 ignition[761]: Ignition 2.14.0
Feb  9 18:42:36.728213 ignition[761]: Stage: kargs
Feb  9 18:42:36.728299 ignition[761]: no configs at "/usr/lib/ignition/base.d"
Feb  9 18:42:36.728309 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb  9 18:42:36.729208 ignition[761]: kargs: kargs passed
Feb  9 18:42:36.729252 ignition[761]: Ignition finished successfully
Feb  9 18:42:36.732535 systemd[1]: Finished ignition-kargs.service.
Feb  9 18:42:36.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:36.734008 systemd[1]: Starting ignition-disks.service...
Feb  9 18:42:36.739911 ignition[767]: Ignition 2.14.0
Feb  9 18:42:36.739925 ignition[767]: Stage: disks
Feb  9 18:42:36.740011 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Feb  9 18:42:36.740021 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb  9 18:42:36.741627 systemd[1]: Finished ignition-disks.service.
Feb  9 18:42:36.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:36.740818 ignition[767]: disks: disks passed
Feb  9 18:42:36.742998 systemd[1]: Reached target initrd-root-device.target.
Feb  9 18:42:36.740862 ignition[767]: Ignition finished successfully
Feb  9 18:42:36.744030 systemd[1]: Reached target local-fs-pre.target.
Feb  9 18:42:36.745007 systemd[1]: Reached target local-fs.target.
Feb  9 18:42:36.746062 systemd[1]: Reached target sysinit.target.
Feb  9 18:42:36.747126 systemd[1]: Reached target basic.target.
Feb  9 18:42:36.748878 systemd[1]: Starting systemd-fsck-root.service...
Feb  9 18:42:36.757638 systemd-resolved[291]: Detected conflict on linux IN A 10.0.0.123
Feb  9 18:42:36.757652 systemd-resolved[291]: Hostname conflict, changing published hostname from 'linux' to 'linux7'.
Feb  9 18:42:36.760029 systemd-fsck[775]: ROOT: clean, 602/553520 files, 56013/553472 blocks
Feb  9 18:42:36.762685 systemd[1]: Finished systemd-fsck-root.service.
Feb  9 18:42:36.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:36.764368 systemd[1]: Mounting sysroot.mount...
Feb  9 18:42:36.770806 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb  9 18:42:36.770938 systemd[1]: Mounted sysroot.mount.
Feb  9 18:42:36.771628 systemd[1]: Reached target initrd-root-fs.target.
Feb  9 18:42:36.773571 systemd[1]: Mounting sysroot-usr.mount...
Feb  9 18:42:36.774402 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb  9 18:42:36.774440 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb  9 18:42:36.774463 systemd[1]: Reached target ignition-diskful.target.
Feb  9 18:42:36.776191 systemd[1]: Mounted sysroot-usr.mount.
Feb  9 18:42:36.777673 systemd[1]: Starting initrd-setup-root.service...
Feb  9 18:42:36.781833 initrd-setup-root[785]: cut: /sysroot/etc/passwd: No such file or directory
Feb  9 18:42:36.786262 initrd-setup-root[793]: cut: /sysroot/etc/group: No such file or directory
Feb  9 18:42:36.790119 initrd-setup-root[801]: cut: /sysroot/etc/shadow: No such file or directory
Feb  9 18:42:36.793718 initrd-setup-root[809]: cut: /sysroot/etc/gshadow: No such file or directory
Feb  9 18:42:36.818136 systemd[1]: Finished initrd-setup-root.service.
Feb  9 18:42:36.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:36.819393 systemd[1]: Starting ignition-mount.service...
Feb  9 18:42:36.820568 systemd[1]: Starting sysroot-boot.service...
Feb  9 18:42:36.824417 bash[826]: umount: /sysroot/usr/share/oem: not mounted.
Feb  9 18:42:36.831647 ignition[828]: INFO     : Ignition 2.14.0
Feb  9 18:42:36.831647 ignition[828]: INFO     : Stage: mount
Feb  9 18:42:36.833481 ignition[828]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb  9 18:42:36.833481 ignition[828]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb  9 18:42:36.833481 ignition[828]: INFO     : mount: mount passed
Feb  9 18:42:36.833481 ignition[828]: INFO     : Ignition finished successfully
Feb  9 18:42:36.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:36.833989 systemd[1]: Finished ignition-mount.service.
Feb  9 18:42:36.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:36.841099 systemd[1]: Finished sysroot-boot.service.
Feb  9 18:42:37.504273 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb  9 18:42:37.510835 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (837)
Feb  9 18:42:37.510870 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb  9 18:42:37.510880 kernel: BTRFS info (device vda6): using free space tree
Feb  9 18:42:37.511800 kernel: BTRFS info (device vda6): has skinny extents
Feb  9 18:42:37.514514 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb  9 18:42:37.515878 systemd[1]: Starting ignition-files.service...
Feb  9 18:42:37.528938 ignition[857]: INFO     : Ignition 2.14.0
Feb  9 18:42:37.528938 ignition[857]: INFO     : Stage: files
Feb  9 18:42:37.530386 ignition[857]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb  9 18:42:37.530386 ignition[857]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb  9 18:42:37.530386 ignition[857]: DEBUG    : files: compiled without relabeling support, skipping
Feb  9 18:42:37.533272 ignition[857]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb  9 18:42:37.533272 ignition[857]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb  9 18:42:37.535479 ignition[857]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb  9 18:42:37.535479 ignition[857]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb  9 18:42:37.535479 ignition[857]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb  9 18:42:37.535330 unknown[857]: wrote ssh authorized keys file for user: core
Feb  9 18:42:37.539817 ignition[857]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/etc/flatcar-cgroupv1"
Feb  9 18:42:37.539817 ignition[857]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb  9 18:42:37.539817 ignition[857]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb  9 18:42:37.539817 ignition[857]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1
Feb  9 18:42:37.882264 ignition[857]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb  9 18:42:38.098176 ignition[857]: DEBUG    : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742
Feb  9 18:42:38.100216 ignition[857]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb  9 18:42:38.100216 ignition[857]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb  9 18:42:38.100216 ignition[857]: INFO     : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1
Feb  9 18:42:38.321180 ignition[857]: INFO     : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb  9 18:42:38.439212 ignition[857]: DEBUG    : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c
Feb  9 18:42:38.441457 ignition[857]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb  9 18:42:38.441457 ignition[857]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/opt/bin/kubeadm"
Feb  9 18:42:38.441457 ignition[857]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1
Feb  9 18:42:38.441883 systemd-networkd[740]: eth0: Gained IPv6LL
Feb  9 18:42:38.487141 ignition[857]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb  9 18:42:38.739393 ignition[857]: DEBUG    : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db
Feb  9 18:42:38.739393 ignition[857]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb  9 18:42:38.742707 ignition[857]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/opt/bin/kubelet"
Feb  9 18:42:38.742707 ignition[857]: INFO     : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1
Feb  9 18:42:38.761198 ignition[857]: INFO     : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb  9 18:42:39.438070 ignition[857]: DEBUG    : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d
Feb  9 18:42:39.440313 ignition[857]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb  9 18:42:39.440313 ignition[857]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/install.sh"
Feb  9 18:42:39.440313 ignition[857]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh"
Feb  9 18:42:39.440313 ignition[857]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/docker/daemon.json"
Feb  9 18:42:39.440313 ignition[857]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb  9 18:42:39.440313 ignition[857]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb  9 18:42:39.440313 ignition[857]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb  9 18:42:39.440313 ignition[857]: INFO     : files: op(b): [started]  processing unit "prepare-cni-plugins.service"
Feb  9 18:42:39.440313 ignition[857]: INFO     : files: op(b): op(c): [started]  writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb  9 18:42:39.440313 ignition[857]: INFO     : files: op(b): op(c): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb  9 18:42:39.440313 ignition[857]: INFO     : files: op(b): [finished] processing unit "prepare-cni-plugins.service"
Feb  9 18:42:39.440313 ignition[857]: INFO     : files: op(d): [started]  processing unit "prepare-critools.service"
Feb  9 18:42:39.440313 ignition[857]: INFO     : files: op(d): op(e): [started]  writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb  9 18:42:39.440313 ignition[857]: INFO     : files: op(d): op(e): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb  9 18:42:39.440313 ignition[857]: INFO     : files: op(d): [finished] processing unit "prepare-critools.service"
Feb  9 18:42:39.440313 ignition[857]: INFO     : files: op(f): [started]  processing unit "coreos-metadata.service"
Feb  9 18:42:39.440313 ignition[857]: INFO     : files: op(f): op(10): [started]  writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb  9 18:42:39.462969 ignition[857]: INFO     : files: op(f): op(10): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb  9 18:42:39.462969 ignition[857]: INFO     : files: op(f): [finished] processing unit "coreos-metadata.service"
Feb  9 18:42:39.462969 ignition[857]: INFO     : files: op(11): [started]  processing unit "containerd.service"
Feb  9 18:42:39.462969 ignition[857]: INFO     : files: op(11): op(12): [started]  writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb  9 18:42:39.462969 ignition[857]: INFO     : files: op(11): op(12): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb  9 18:42:39.462969 ignition[857]: INFO     : files: op(11): [finished] processing unit "containerd.service"
Feb  9 18:42:39.462969 ignition[857]: INFO     : files: op(13): [started]  setting preset to enabled for "prepare-critools.service"
Feb  9 18:42:39.462969 ignition[857]: INFO     : files: op(13): [finished] setting preset to enabled for "prepare-critools.service"
Feb  9 18:42:39.462969 ignition[857]: INFO     : files: op(14): [started]  setting preset to disabled for "coreos-metadata.service"
Feb  9 18:42:39.462969 ignition[857]: INFO     : files: op(14): op(15): [started]  removing enablement symlink(s) for "coreos-metadata.service"
Feb  9 18:42:39.478712 kernel: kauditd_printk_skb: 22 callbacks suppressed
Feb  9 18:42:39.478733 kernel: audit: type=1130 audit(1707504159.471:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.478825 ignition[857]: INFO     : files: op(14): op(15): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb  9 18:42:39.478825 ignition[857]: INFO     : files: op(14): [finished] setting preset to disabled for "coreos-metadata.service"
Feb  9 18:42:39.478825 ignition[857]: INFO     : files: op(16): [started]  setting preset to enabled for "prepare-cni-plugins.service"
Feb  9 18:42:39.478825 ignition[857]: INFO     : files: op(16): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb  9 18:42:39.478825 ignition[857]: INFO     : files: createResultFile: createFiles: op(17): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb  9 18:42:39.478825 ignition[857]: INFO     : files: createResultFile: createFiles: op(17): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb  9 18:42:39.478825 ignition[857]: INFO     : files: files passed
Feb  9 18:42:39.478825 ignition[857]: INFO     : Ignition finished successfully
Feb  9 18:42:39.493682 kernel: audit: type=1130 audit(1707504159.480:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.493702 kernel: audit: type=1131 audit(1707504159.480:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.493712 kernel: audit: type=1130 audit(1707504159.485:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.471687 systemd[1]: Finished ignition-files.service.
Feb  9 18:42:39.473234 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb  9 18:42:39.495300 initrd-setup-root-after-ignition[883]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Feb  9 18:42:39.476329 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb  9 18:42:39.497706 initrd-setup-root-after-ignition[885]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb  9 18:42:39.476996 systemd[1]: Starting ignition-quench.service...
Feb  9 18:42:39.480146 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb  9 18:42:39.480225 systemd[1]: Finished ignition-quench.service.
Feb  9 18:42:39.484565 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb  9 18:42:39.486680 systemd[1]: Reached target ignition-complete.target.
Feb  9 18:42:39.490583 systemd[1]: Starting initrd-parse-etc.service...
Feb  9 18:42:39.503018 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb  9 18:42:39.503109 systemd[1]: Finished initrd-parse-etc.service.
Feb  9 18:42:39.508328 kernel: audit: type=1130 audit(1707504159.504:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.508350 kernel: audit: type=1131 audit(1707504159.504:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.504495 systemd[1]: Reached target initrd-fs.target.
Feb  9 18:42:39.509002 systemd[1]: Reached target initrd.target.
Feb  9 18:42:39.510110 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb  9 18:42:39.510853 systemd[1]: Starting dracut-pre-pivot.service...
Feb  9 18:42:39.520941 systemd[1]: Finished dracut-pre-pivot.service.
Feb  9 18:42:39.523812 kernel: audit: type=1130 audit(1707504159.520:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.522449 systemd[1]: Starting initrd-cleanup.service...
Feb  9 18:42:39.530536 systemd[1]: Stopped target nss-lookup.target.
Feb  9 18:42:39.531224 systemd[1]: Stopped target remote-cryptsetup.target.
Feb  9 18:42:39.532230 systemd[1]: Stopped target timers.target.
Feb  9 18:42:39.533202 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb  9 18:42:39.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.533311 systemd[1]: Stopped dracut-pre-pivot.service.
Feb  9 18:42:39.537374 kernel: audit: type=1131 audit(1707504159.534:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.534324 systemd[1]: Stopped target initrd.target.
Feb  9 18:42:39.537024 systemd[1]: Stopped target basic.target.
Feb  9 18:42:39.537922 systemd[1]: Stopped target ignition-complete.target.
Feb  9 18:42:39.538875 systemd[1]: Stopped target ignition-diskful.target.
Feb  9 18:42:39.539893 systemd[1]: Stopped target initrd-root-device.target.
Feb  9 18:42:39.540934 systemd[1]: Stopped target remote-fs.target.
Feb  9 18:42:39.541890 systemd[1]: Stopped target remote-fs-pre.target.
Feb  9 18:42:39.542892 systemd[1]: Stopped target sysinit.target.
Feb  9 18:42:39.543817 systemd[1]: Stopped target local-fs.target.
Feb  9 18:42:39.544739 systemd[1]: Stopped target local-fs-pre.target.
Feb  9 18:42:39.545678 systemd[1]: Stopped target swap.target.
Feb  9 18:42:39.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.546546 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb  9 18:42:39.550635 kernel: audit: type=1131 audit(1707504159.546:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.546654 systemd[1]: Stopped dracut-pre-mount.service.
Feb  9 18:42:39.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.547613 systemd[1]: Stopped target cryptsetup.target.
Feb  9 18:42:39.554395 kernel: audit: type=1131 audit(1707504159.551:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.550109 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb  9 18:42:39.550207 systemd[1]: Stopped dracut-initqueue.service.
Feb  9 18:42:39.551297 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb  9 18:42:39.551389 systemd[1]: Stopped ignition-fetch-offline.service.
Feb  9 18:42:39.554063 systemd[1]: Stopped target paths.target.
Feb  9 18:42:39.554950 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb  9 18:42:39.559815 systemd[1]: Stopped systemd-ask-password-console.path.
Feb  9 18:42:39.560496 systemd[1]: Stopped target slices.target.
Feb  9 18:42:39.561486 systemd[1]: Stopped target sockets.target.
Feb  9 18:42:39.562375 systemd[1]: iscsid.socket: Deactivated successfully.
Feb  9 18:42:39.562443 systemd[1]: Closed iscsid.socket.
Feb  9 18:42:39.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.563273 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb  9 18:42:39.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.563371 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb  9 18:42:39.564426 systemd[1]: ignition-files.service: Deactivated successfully.
Feb  9 18:42:39.564520 systemd[1]: Stopped ignition-files.service.
Feb  9 18:42:39.566206 systemd[1]: Stopping ignition-mount.service...
Feb  9 18:42:39.567290 systemd[1]: Stopping iscsiuio.service...
Feb  9 18:42:39.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.569920 systemd[1]: Stopping sysroot-boot.service...
Feb  9 18:42:39.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.572842 ignition[898]: INFO     : Ignition 2.14.0
Feb  9 18:42:39.572842 ignition[898]: INFO     : Stage: umount
Feb  9 18:42:39.570428 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb  9 18:42:39.574597 ignition[898]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb  9 18:42:39.574597 ignition[898]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb  9 18:42:39.574597 ignition[898]: INFO     : umount: umount passed
Feb  9 18:42:39.574597 ignition[898]: INFO     : Ignition finished successfully
Feb  9 18:42:39.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.570557 systemd[1]: Stopped systemd-udev-trigger.service.
Feb  9 18:42:39.571529 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb  9 18:42:39.571625 systemd[1]: Stopped dracut-pre-trigger.service.
Feb  9 18:42:39.573922 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb  9 18:42:39.574019 systemd[1]: Stopped iscsiuio.service.
Feb  9 18:42:39.576167 systemd[1]: Stopped target network.target.
Feb  9 18:42:39.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.578169 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb  9 18:42:39.578202 systemd[1]: Closed iscsiuio.socket.
Feb  9 18:42:39.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.579162 systemd[1]: Stopping systemd-networkd.service...
Feb  9 18:42:39.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.580570 systemd[1]: Stopping systemd-resolved.service...
Feb  9 18:42:39.581938 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb  9 18:42:39.582019 systemd[1]: Finished initrd-cleanup.service.
Feb  9 18:42:39.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.582821 systemd-networkd[740]: eth0: DHCPv6 lease lost
Feb  9 18:42:39.589000 audit: BPF prog-id=9 op=UNLOAD
Feb  9 18:42:39.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.583913 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb  9 18:42:39.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.584298 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb  9 18:42:39.584384 systemd[1]: Stopped systemd-networkd.service.
Feb  9 18:42:39.585858 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb  9 18:42:39.585938 systemd[1]: Stopped ignition-mount.service.
Feb  9 18:42:39.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.587619 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb  9 18:42:39.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.587652 systemd[1]: Closed systemd-networkd.socket.
Feb  9 18:42:39.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.588556 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb  9 18:42:39.588604 systemd[1]: Stopped ignition-disks.service.
Feb  9 18:42:39.589954 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb  9 18:42:39.590000 systemd[1]: Stopped ignition-kargs.service.
Feb  9 18:42:39.591087 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb  9 18:42:39.591128 systemd[1]: Stopped ignition-setup.service.
Feb  9 18:42:39.593116 systemd[1]: Stopping network-cleanup.service...
Feb  9 18:42:39.595859 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb  9 18:42:39.595915 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb  9 18:42:39.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.597070 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb  9 18:42:39.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.597111 systemd[1]: Stopped systemd-sysctl.service.
Feb  9 18:42:39.598826 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb  9 18:42:39.612000 audit: BPF prog-id=6 op=UNLOAD
Feb  9 18:42:39.598870 systemd[1]: Stopped systemd-modules-load.service.
Feb  9 18:42:39.599739 systemd[1]: Stopping systemd-udevd.service...
Feb  9 18:42:39.606460 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb  9 18:42:39.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.607049 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb  9 18:42:39.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.608288 systemd[1]: Stopped systemd-resolved.service.
Feb  9 18:42:39.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.609425 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb  9 18:42:39.609538 systemd[1]: Stopped systemd-udevd.service.
Feb  9 18:42:39.611844 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb  9 18:42:39.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.611889 systemd[1]: Closed systemd-udevd-control.socket.
Feb  9 18:42:39.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.615645 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb  9 18:42:39.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.615681 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb  9 18:42:39.617187 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb  9 18:42:39.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.617235 systemd[1]: Stopped dracut-pre-udev.service.
Feb  9 18:42:39.618444 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb  9 18:42:39.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.618486 systemd[1]: Stopped dracut-cmdline.service.
Feb  9 18:42:39.620033 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb  9 18:42:39.620075 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb  9 18:42:39.622710 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb  9 18:42:39.624094 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb  9 18:42:39.624168 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Feb  9 18:42:39.626080 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb  9 18:42:39.626118 systemd[1]: Stopped kmod-static-nodes.service.
Feb  9 18:42:39.626867 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb  9 18:42:39.626910 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb  9 18:42:39.628978 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb  9 18:42:39.629537 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb  9 18:42:39.629639 systemd[1]: Stopped network-cleanup.service.
Feb  9 18:42:39.631299 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb  9 18:42:39.631395 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb  9 18:42:39.665249 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb  9 18:42:39.665348 systemd[1]: Stopped sysroot-boot.service.
Feb  9 18:42:39.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.666761 systemd[1]: Reached target initrd-switch-root.target.
Feb  9 18:42:39.667718 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb  9 18:42:39.667778 systemd[1]: Stopped initrd-setup-root.service.
Feb  9 18:42:39.669628 systemd[1]: Starting initrd-switch-root.service...
Feb  9 18:42:39.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:39.675834 systemd[1]: Switching root.
Feb  9 18:42:39.678000 audit: BPF prog-id=8 op=UNLOAD
Feb  9 18:42:39.678000 audit: BPF prog-id=7 op=UNLOAD
Feb  9 18:42:39.678000 audit: BPF prog-id=5 op=UNLOAD
Feb  9 18:42:39.678000 audit: BPF prog-id=4 op=UNLOAD
Feb  9 18:42:39.678000 audit: BPF prog-id=3 op=UNLOAD
Feb  9 18:42:39.689128 iscsid[747]: iscsid shutting down.
Feb  9 18:42:39.689624 systemd-journald[289]: Journal stopped
Feb  9 18:42:41.785218 systemd-journald[289]: Received SIGTERM from PID 1 (systemd).
Feb  9 18:42:41.785271 kernel: SELinux:  Class mctp_socket not defined in policy.
Feb  9 18:42:41.785288 kernel: SELinux:  Class anon_inode not defined in policy.
Feb  9 18:42:41.785299 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb  9 18:42:41.785309 kernel: SELinux:  policy capability network_peer_controls=1
Feb  9 18:42:41.785321 kernel: SELinux:  policy capability open_perms=1
Feb  9 18:42:41.785331 kernel: SELinux:  policy capability extended_socket_class=1
Feb  9 18:42:41.785341 kernel: SELinux:  policy capability always_check_network=0
Feb  9 18:42:41.785351 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  9 18:42:41.785360 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  9 18:42:41.785370 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Feb  9 18:42:41.785382 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Feb  9 18:42:41.785393 systemd[1]: Successfully loaded SELinux policy in 35.113ms.
Feb  9 18:42:41.785406 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.114ms.
Feb  9 18:42:41.785419 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb  9 18:42:41.785430 systemd[1]: Detected virtualization kvm.
Feb  9 18:42:41.785440 systemd[1]: Detected architecture arm64.
Feb  9 18:42:41.785450 systemd[1]: Detected first boot.
Feb  9 18:42:41.785461 systemd[1]: Initializing machine ID from VM UUID.
Feb  9 18:42:41.785471 kernel: SELinux:  Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb  9 18:42:41.785481 systemd[1]: Populated /etc with preset unit settings.
Feb  9 18:42:41.785494 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb  9 18:42:41.785506 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb  9 18:42:41.785517 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb  9 18:42:41.785528 systemd[1]: Queued start job for default target multi-user.target.
Feb  9 18:42:41.785539 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Feb  9 18:42:41.785551 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb  9 18:42:41.785561 systemd[1]: Created slice system-addon\x2drun.slice.
Feb  9 18:42:41.785572 systemd[1]: Created slice system-getty.slice.
Feb  9 18:42:41.785583 systemd[1]: Created slice system-modprobe.slice.
Feb  9 18:42:41.785593 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb  9 18:42:41.785607 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb  9 18:42:41.785618 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb  9 18:42:41.785628 systemd[1]: Created slice user.slice.
Feb  9 18:42:41.785642 systemd[1]: Started systemd-ask-password-console.path.
Feb  9 18:42:41.785653 systemd[1]: Started systemd-ask-password-wall.path.
Feb  9 18:42:41.785663 systemd[1]: Set up automount boot.automount.
Feb  9 18:42:41.785674 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb  9 18:42:41.785685 systemd[1]: Reached target integritysetup.target.
Feb  9 18:42:41.785695 systemd[1]: Reached target remote-cryptsetup.target.
Feb  9 18:42:41.785706 systemd[1]: Reached target remote-fs.target.
Feb  9 18:42:41.785717 systemd[1]: Reached target slices.target.
Feb  9 18:42:41.785727 systemd[1]: Reached target swap.target.
Feb  9 18:42:41.785738 systemd[1]: Reached target torcx.target.
Feb  9 18:42:41.785748 systemd[1]: Reached target veritysetup.target.
Feb  9 18:42:41.785770 systemd[1]: Listening on systemd-coredump.socket.
Feb  9 18:42:41.785784 systemd[1]: Listening on systemd-initctl.socket.
Feb  9 18:42:41.785831 systemd[1]: Listening on systemd-journald-audit.socket.
Feb  9 18:42:41.785843 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb  9 18:42:41.785853 systemd[1]: Listening on systemd-journald.socket.
Feb  9 18:42:41.785864 systemd[1]: Listening on systemd-networkd.socket.
Feb  9 18:42:41.785874 systemd[1]: Listening on systemd-udevd-control.socket.
Feb  9 18:42:41.785885 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb  9 18:42:41.785895 systemd[1]: Listening on systemd-userdbd.socket.
Feb  9 18:42:41.785905 systemd[1]: Mounting dev-hugepages.mount...
Feb  9 18:42:41.785920 systemd[1]: Mounting dev-mqueue.mount...
Feb  9 18:42:41.785931 systemd[1]: Mounting media.mount...
Feb  9 18:42:41.785942 systemd[1]: Mounting sys-kernel-debug.mount...
Feb  9 18:42:41.785952 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb  9 18:42:41.785963 systemd[1]: Mounting tmp.mount...
Feb  9 18:42:41.785974 systemd[1]: Starting flatcar-tmpfiles.service...
Feb  9 18:42:41.785984 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb  9 18:42:41.785995 systemd[1]: Starting kmod-static-nodes.service...
Feb  9 18:42:41.786005 systemd[1]: Starting modprobe@configfs.service...
Feb  9 18:42:41.786017 systemd[1]: Starting modprobe@dm_mod.service...
Feb  9 18:42:41.786027 systemd[1]: Starting modprobe@drm.service...
Feb  9 18:42:41.786038 systemd[1]: Starting modprobe@efi_pstore.service...
Feb  9 18:42:41.786048 systemd[1]: Starting modprobe@fuse.service...
Feb  9 18:42:41.786059 systemd[1]: Starting modprobe@loop.service...
Feb  9 18:42:41.786069 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb  9 18:42:41.786080 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Feb  9 18:42:41.786091 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Feb  9 18:42:41.786102 systemd[1]: Starting systemd-journald.service...
Feb  9 18:42:41.786113 systemd[1]: Starting systemd-modules-load.service...
Feb  9 18:42:41.786124 systemd[1]: Starting systemd-network-generator.service...
Feb  9 18:42:41.786134 kernel: fuse: init (API version 7.34)
Feb  9 18:42:41.786146 systemd[1]: Starting systemd-remount-fs.service...
Feb  9 18:42:41.786157 kernel: loop: module loaded
Feb  9 18:42:41.786167 systemd[1]: Starting systemd-udev-trigger.service...
Feb  9 18:42:41.786178 systemd[1]: Mounted dev-hugepages.mount.
Feb  9 18:42:41.786188 systemd[1]: Mounted dev-mqueue.mount.
Feb  9 18:42:41.786198 systemd[1]: Mounted media.mount.
Feb  9 18:42:41.786210 systemd[1]: Mounted sys-kernel-debug.mount.
Feb  9 18:42:41.786221 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb  9 18:42:41.786233 systemd[1]: Mounted tmp.mount.
Feb  9 18:42:41.786244 systemd[1]: Finished kmod-static-nodes.service.
Feb  9 18:42:41.786254 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb  9 18:42:41.786265 systemd[1]: Finished modprobe@configfs.service.
Feb  9 18:42:41.786276 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb  9 18:42:41.786287 systemd[1]: Finished modprobe@dm_mod.service.
Feb  9 18:42:41.786300 systemd-journald[1032]: Journal started
Feb  9 18:42:41.786344 systemd-journald[1032]: Runtime Journal (/run/log/journal/48798b8f6a324348b55084c8cc7d3be9) is 6.0M, max 48.7M, 42.6M free.
Feb  9 18:42:41.700000 audit[1]: AVC avc:  denied  { audit_read } for  pid=1 comm="systemd" capability=37  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb  9 18:42:41.700000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Feb  9 18:42:41.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:41.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:41.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:41.784000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb  9 18:42:41.784000 audit[1032]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=fffffc556bf0 a2=4000 a3=1 items=0 ppid=1 pid=1032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb  9 18:42:41.784000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb  9 18:42:41.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:41.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:41.787233 systemd[1]: Started systemd-journald.service.
Feb  9 18:42:41.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:41.788833 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb  9 18:42:41.789011 systemd[1]: Finished modprobe@drm.service.
Feb  9 18:42:41.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:41.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:41.789967 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb  9 18:42:41.790182 systemd[1]: Finished modprobe@efi_pstore.service.
Feb  9 18:42:41.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:41.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:41.791305 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb  9 18:42:41.791513 systemd[1]: Finished modprobe@fuse.service.
Feb  9 18:42:41.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:41.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:41.792522 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb  9 18:42:41.793115 systemd[1]: Finished modprobe@loop.service.
Feb  9 18:42:41.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:41.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:41.794603 systemd[1]: Finished systemd-modules-load.service.
Feb  9 18:42:41.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:41.797656 systemd[1]: Finished systemd-network-generator.service.
Feb  9 18:42:41.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:41.798884 systemd[1]: Finished systemd-remount-fs.service.
Feb  9 18:42:41.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:41.800204 systemd[1]: Reached target network-pre.target.
Feb  9 18:42:41.802256 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb  9 18:42:41.804116 systemd[1]: Mounting sys-kernel-config.mount...
Feb  9 18:42:41.804799 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb  9 18:42:41.807116 systemd[1]: Starting systemd-hwdb-update.service...
Feb  9 18:42:41.809063 systemd[1]: Starting systemd-journal-flush.service...
Feb  9 18:42:41.809875 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb  9 18:42:41.811174 systemd[1]: Starting systemd-random-seed.service...
Feb  9 18:42:41.811941 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb  9 18:42:41.813111 systemd[1]: Starting systemd-sysctl.service...
Feb  9 18:42:41.817156 systemd-journald[1032]: Time spent on flushing to /var/log/journal/48798b8f6a324348b55084c8cc7d3be9 is 15.464ms for 942 entries.
Feb  9 18:42:41.817156 systemd-journald[1032]: System Journal (/var/log/journal/48798b8f6a324348b55084c8cc7d3be9) is 8.0M, max 195.6M, 187.6M free.
Feb  9 18:42:41.840494 systemd-journald[1032]: Received client request to flush runtime journal.
Feb  9 18:42:41.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:41.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:41.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:41.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:41.817634 systemd[1]: Finished flatcar-tmpfiles.service.
Feb  9 18:42:41.818944 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb  9 18:42:41.819772 systemd[1]: Mounted sys-kernel-config.mount.
Feb  9 18:42:41.821986 systemd[1]: Starting systemd-sysusers.service...
Feb  9 18:42:41.826519 systemd[1]: Finished systemd-random-seed.service.
Feb  9 18:42:41.827406 systemd[1]: Reached target first-boot-complete.target.
Feb  9 18:42:41.835990 systemd[1]: Finished systemd-sysctl.service.
Feb  9 18:42:41.837950 systemd[1]: Finished systemd-udev-trigger.service.
Feb  9 18:42:41.839991 systemd[1]: Starting systemd-udev-settle.service...
Feb  9 18:42:41.841741 systemd[1]: Finished systemd-journal-flush.service.
Feb  9 18:42:41.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:41.847763 udevadm[1088]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb  9 18:42:41.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:41.856564 systemd[1]: Finished systemd-sysusers.service.
Feb  9 18:42:41.858648 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb  9 18:42:41.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:41.874105 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb  9 18:42:42.174354 systemd[1]: Finished systemd-hwdb-update.service.
Feb  9 18:42:42.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:42.176462 systemd[1]: Starting systemd-udevd.service...
Feb  9 18:42:42.195627 systemd-udevd[1095]: Using default interface naming scheme 'v252'.
Feb  9 18:42:42.206830 systemd[1]: Started systemd-udevd.service.
Feb  9 18:42:42.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:42.209348 systemd[1]: Starting systemd-networkd.service...
Feb  9 18:42:42.221173 systemd[1]: Starting systemd-userdbd.service...
Feb  9 18:42:42.229677 systemd[1]: Found device dev-ttyAMA0.device.
Feb  9 18:42:42.280045 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb  9 18:42:42.281002 systemd[1]: Started systemd-userdbd.service.
Feb  9 18:42:42.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:42.313220 systemd[1]: Finished systemd-udev-settle.service.
Feb  9 18:42:42.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:42.315264 systemd[1]: Starting lvm2-activation-early.service...
Feb  9 18:42:42.326607 lvm[1128]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb  9 18:42:42.337874 systemd-networkd[1103]: lo: Link UP
Feb  9 18:42:42.337883 systemd-networkd[1103]: lo: Gained carrier
Feb  9 18:42:42.338215 systemd-networkd[1103]: Enumeration completed
Feb  9 18:42:42.338319 systemd-networkd[1103]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb  9 18:42:42.338335 systemd[1]: Started systemd-networkd.service.
Feb  9 18:42:42.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:42.344783 systemd-networkd[1103]: eth0: Link UP
Feb  9 18:42:42.344803 systemd-networkd[1103]: eth0: Gained carrier
Feb  9 18:42:42.361628 systemd[1]: Finished lvm2-activation-early.service.
Feb  9 18:42:42.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:42.362478 systemd[1]: Reached target cryptsetup.target.
Feb  9 18:42:42.364233 systemd[1]: Starting lvm2-activation.service...
Feb  9 18:42:42.367669 lvm[1132]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb  9 18:42:42.371121 systemd-networkd[1103]: eth0: DHCPv4 address 10.0.0.123/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb  9 18:42:42.409626 systemd[1]: Finished lvm2-activation.service.
Feb  9 18:42:42.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:42.410378 systemd[1]: Reached target local-fs-pre.target.
Feb  9 18:42:42.411013 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb  9 18:42:42.411039 systemd[1]: Reached target local-fs.target.
Feb  9 18:42:42.411573 systemd[1]: Reached target machines.target.
Feb  9 18:42:42.413351 systemd[1]: Starting ldconfig.service...
Feb  9 18:42:42.414138 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb  9 18:42:42.414203 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb  9 18:42:42.415264 systemd[1]: Starting systemd-boot-update.service...
Feb  9 18:42:42.416861 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb  9 18:42:42.418875 systemd[1]: Starting systemd-machine-id-commit.service...
Feb  9 18:42:42.419688 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb  9 18:42:42.419769 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb  9 18:42:42.420844 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb  9 18:42:42.421917 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1135 (bootctl)
Feb  9 18:42:42.423105 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb  9 18:42:42.432238 systemd-tmpfiles[1138]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb  9 18:42:42.435484 systemd-tmpfiles[1138]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb  9 18:42:42.437054 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb  9 18:42:42.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:42.441042 systemd-tmpfiles[1138]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb  9 18:42:42.502229 systemd[1]: Finished systemd-machine-id-commit.service.
Feb  9 18:42:42.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:42.521033 systemd-fsck[1143]: fsck.fat 4.2 (2021-01-31)
Feb  9 18:42:42.521033 systemd-fsck[1143]: /dev/vda1: 236 files, 113719/258078 clusters
Feb  9 18:42:42.523271 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb  9 18:42:42.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:42.588134 ldconfig[1134]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb  9 18:42:42.591564 systemd[1]: Finished ldconfig.service.
Feb  9 18:42:42.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:42.770820 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb  9 18:42:42.772277 systemd[1]: Mounting boot.mount...
Feb  9 18:42:42.779081 systemd[1]: Mounted boot.mount.
Feb  9 18:42:42.785973 systemd[1]: Finished systemd-boot-update.service.
Feb  9 18:42:42.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:42.835633 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb  9 18:42:42.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:42.837662 systemd[1]: Starting audit-rules.service...
Feb  9 18:42:42.839413 systemd[1]: Starting clean-ca-certificates.service...
Feb  9 18:42:42.841113 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb  9 18:42:42.843524 systemd[1]: Starting systemd-resolved.service...
Feb  9 18:42:42.845922 systemd[1]: Starting systemd-timesyncd.service...
Feb  9 18:42:42.847940 systemd[1]: Starting systemd-update-utmp.service...
Feb  9 18:42:42.849352 systemd[1]: Finished clean-ca-certificates.service.
Feb  9 18:42:42.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:42.850442 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb  9 18:42:42.851000 audit[1165]: SYSTEM_BOOT pid=1165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:42.855030 systemd[1]: Finished systemd-update-utmp.service.
Feb  9 18:42:42.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:42.862337 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb  9 18:42:42.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:42.864401 systemd[1]: Starting systemd-update-done.service...
Feb  9 18:42:42.873868 systemd[1]: Finished systemd-update-done.service.
Feb  9 18:42:42.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:42:42.881000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb  9 18:42:42.881000 audit[1179]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc30eafc0 a2=420 a3=0 items=0 ppid=1153 pid=1179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb  9 18:42:42.881000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb  9 18:42:42.882155 augenrules[1179]: No rules
Feb  9 18:42:42.882841 systemd[1]: Finished audit-rules.service.
Feb  9 18:42:42.901543 systemd-resolved[1158]: Positive Trust Anchors:
Feb  9 18:42:42.901558 systemd-resolved[1158]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb  9 18:42:42.901583 systemd[1]: Started systemd-timesyncd.service.
Feb  9 18:42:42.901585 systemd-resolved[1158]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb  9 18:42:42.902631 systemd-timesyncd[1164]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb  9 18:42:42.902736 systemd[1]: Reached target time-set.target.
Feb  9 18:42:42.903004 systemd-timesyncd[1164]: Initial clock synchronization to Fri 2024-02-09 18:42:42.655028 UTC.
Feb  9 18:42:42.912892 systemd-resolved[1158]: Defaulting to hostname 'linux'.
Feb  9 18:42:42.914256 systemd[1]: Started systemd-resolved.service.
Feb  9 18:42:42.915104 systemd[1]: Reached target network.target.
Feb  9 18:42:42.915816 systemd[1]: Reached target nss-lookup.target.
Feb  9 18:42:42.916569 systemd[1]: Reached target sysinit.target.
Feb  9 18:42:42.917340 systemd[1]: Started motdgen.path.
Feb  9 18:42:42.918025 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb  9 18:42:42.919170 systemd[1]: Started logrotate.timer.
Feb  9 18:42:42.919955 systemd[1]: Started mdadm.timer.
Feb  9 18:42:42.920573 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb  9 18:42:42.921306 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb  9 18:42:42.921335 systemd[1]: Reached target paths.target.
Feb  9 18:42:42.922016 systemd[1]: Reached target timers.target.
Feb  9 18:42:42.923186 systemd[1]: Listening on dbus.socket.
Feb  9 18:42:42.924993 systemd[1]: Starting docker.socket...
Feb  9 18:42:42.926531 systemd[1]: Listening on sshd.socket.
Feb  9 18:42:42.927269 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb  9 18:42:42.927585 systemd[1]: Listening on docker.socket.
Feb  9 18:42:42.928279 systemd[1]: Reached target sockets.target.
Feb  9 18:42:42.928958 systemd[1]: Reached target basic.target.
Feb  9 18:42:42.929770 systemd[1]: System is tainted: cgroupsv1
Feb  9 18:42:42.929831 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb  9 18:42:42.929850 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb  9 18:42:42.930950 systemd[1]: Starting containerd.service...
Feb  9 18:42:42.932623 systemd[1]: Starting dbus.service...
Feb  9 18:42:42.934218 systemd[1]: Starting enable-oem-cloudinit.service...
Feb  9 18:42:42.936118 systemd[1]: Starting extend-filesystems.service...
Feb  9 18:42:42.936904 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb  9 18:42:42.938063 systemd[1]: Starting motdgen.service...
Feb  9 18:42:42.939873 systemd[1]: Starting prepare-cni-plugins.service...
Feb  9 18:42:42.942341 systemd[1]: Starting prepare-critools.service...
Feb  9 18:42:42.946859 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb  9 18:42:42.948856 systemd[1]: Starting sshd-keygen.service...
Feb  9 18:42:42.951704 systemd[1]: Starting systemd-logind.service...
Feb  9 18:42:42.952668 jq[1191]: false
Feb  9 18:42:42.952655 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb  9 18:42:42.952720 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb  9 18:42:42.953075 extend-filesystems[1192]: Found vda
Feb  9 18:42:42.953856 extend-filesystems[1192]: Found vda1
Feb  9 18:42:42.953856 extend-filesystems[1192]: Found vda2
Feb  9 18:42:42.953856 extend-filesystems[1192]: Found vda3
Feb  9 18:42:42.953856 extend-filesystems[1192]: Found usr
Feb  9 18:42:42.953856 extend-filesystems[1192]: Found vda4
Feb  9 18:42:42.953856 extend-filesystems[1192]: Found vda6
Feb  9 18:42:42.953856 extend-filesystems[1192]: Found vda7
Feb  9 18:42:42.953856 extend-filesystems[1192]: Found vda9
Feb  9 18:42:42.953856 extend-filesystems[1192]: Checking size of /dev/vda9
Feb  9 18:42:42.954057 systemd[1]: Starting update-engine.service...
Feb  9 18:42:42.956820 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb  9 18:42:42.961958 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb  9 18:42:42.968350 jq[1211]: true
Feb  9 18:42:42.962196 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb  9 18:42:42.962468 systemd[1]: motdgen.service: Deactivated successfully.
Feb  9 18:42:42.962775 systemd[1]: Finished motdgen.service.
Feb  9 18:42:42.991040 jq[1221]: true
Feb  9 18:42:42.991154 tar[1218]: ./
Feb  9 18:42:42.991154 tar[1218]: ./macvlan
Feb  9 18:42:42.974888 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb  9 18:42:42.975101 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb  9 18:42:42.993711 extend-filesystems[1192]: Resized partition /dev/vda9
Feb  9 18:42:43.007941 extend-filesystems[1245]: resize2fs 1.46.5 (30-Dec-2021)
Feb  9 18:42:43.009028 tar[1220]: crictl
Feb  9 18:42:43.017343 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb  9 18:42:43.029283 dbus-daemon[1190]: [system] SELinux support is enabled
Feb  9 18:42:43.029464 systemd[1]: Started dbus.service.
Feb  9 18:42:43.031908 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb  9 18:42:43.031972 systemd[1]: Reached target system-config.target.
Feb  9 18:42:43.033144 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb  9 18:42:43.033166 systemd[1]: Reached target user-config.target.
Feb  9 18:42:43.042857 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb  9 18:42:43.055362 systemd-logind[1204]: Watching system buttons on /dev/input/event0 (Power Button)
Feb  9 18:42:43.057447 systemd-logind[1204]: New seat seat0.
Feb  9 18:42:43.061552 extend-filesystems[1245]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb  9 18:42:43.061552 extend-filesystems[1245]: old_desc_blocks = 1, new_desc_blocks = 1
Feb  9 18:42:43.061552 extend-filesystems[1245]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb  9 18:42:43.067686 bash[1252]: Updated "/home/core/.ssh/authorized_keys"
Feb  9 18:42:43.059499 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb  9 18:42:43.067825 extend-filesystems[1192]: Resized filesystem in /dev/vda9
Feb  9 18:42:43.059743 systemd[1]: Finished extend-filesystems.service.
Feb  9 18:42:43.061379 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb  9 18:42:43.063174 systemd[1]: Started systemd-logind.service.
Feb  9 18:42:43.070335 update_engine[1206]: I0209 18:42:43.069950  1206 main.cc:92] Flatcar Update Engine starting
Feb  9 18:42:43.072728 tar[1218]: ./static
Feb  9 18:42:43.074507 systemd[1]: Started update-engine.service.
Feb  9 18:42:43.074666 update_engine[1206]: I0209 18:42:43.074647  1206 update_check_scheduler.cc:74] Next update check in 6m29s
Feb  9 18:42:43.076728 systemd[1]: Started locksmithd.service.
Feb  9 18:42:43.099656 tar[1218]: ./vlan
Feb  9 18:42:43.139542 tar[1218]: ./portmap
Feb  9 18:42:43.148145 env[1222]: time="2024-02-09T18:42:43.148081361Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb  9 18:42:43.170196 tar[1218]: ./host-local
Feb  9 18:42:43.176907 env[1222]: time="2024-02-09T18:42:43.176857831Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb  9 18:42:43.177041 env[1222]: time="2024-02-09T18:42:43.177020517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb  9 18:42:43.177931 locksmithd[1256]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb  9 18:42:43.178318 env[1222]: time="2024-02-09T18:42:43.178280642Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb  9 18:42:43.178318 env[1222]: time="2024-02-09T18:42:43.178313706Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb  9 18:42:43.178585 env[1222]: time="2024-02-09T18:42:43.178556242Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb  9 18:42:43.178585 env[1222]: time="2024-02-09T18:42:43.178581864Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb  9 18:42:43.178645 env[1222]: time="2024-02-09T18:42:43.178596942Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb  9 18:42:43.178645 env[1222]: time="2024-02-09T18:42:43.178606439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb  9 18:42:43.178694 env[1222]: time="2024-02-09T18:42:43.178677995Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb  9 18:42:43.179010 env[1222]: time="2024-02-09T18:42:43.178981659Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb  9 18:42:43.179157 env[1222]: time="2024-02-09T18:42:43.179132406Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb  9 18:42:43.179157 env[1222]: time="2024-02-09T18:42:43.179154849Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb  9 18:42:43.179230 env[1222]: time="2024-02-09T18:42:43.179212023Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb  9 18:42:43.179230 env[1222]: time="2024-02-09T18:42:43.179228110Z" level=info msg="metadata content store policy set" policy=shared
Feb  9 18:42:43.182362 env[1222]: time="2024-02-09T18:42:43.182333749Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb  9 18:42:43.182362 env[1222]: time="2024-02-09T18:42:43.182362976Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb  9 18:42:43.182451 env[1222]: time="2024-02-09T18:42:43.182382125Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb  9 18:42:43.182451 env[1222]: time="2024-02-09T18:42:43.182412863Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb  9 18:42:43.182451 env[1222]: time="2024-02-09T18:42:43.182426275Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb  9 18:42:43.182451 env[1222]: time="2024-02-09T18:42:43.182439183Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb  9 18:42:43.182536 env[1222]: time="2024-02-09T18:42:43.182452556Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb  9 18:42:43.182859 env[1222]: time="2024-02-09T18:42:43.182838086Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb  9 18:42:43.182904 env[1222]: time="2024-02-09T18:42:43.182862817Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb  9 18:42:43.182904 env[1222]: time="2024-02-09T18:42:43.182876383Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb  9 18:42:43.182904 env[1222]: time="2024-02-09T18:42:43.182888283Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb  9 18:42:43.182904 env[1222]: time="2024-02-09T18:42:43.182899563Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb  9 18:42:43.183042 env[1222]: time="2024-02-09T18:42:43.183021122Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb  9 18:42:43.183126 env[1222]: time="2024-02-09T18:42:43.183107678Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb  9 18:42:43.183415 env[1222]: time="2024-02-09T18:42:43.183395295Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb  9 18:42:43.183451 env[1222]: time="2024-02-09T18:42:43.183425258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb  9 18:42:43.183451 env[1222]: time="2024-02-09T18:42:43.183439484Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb  9 18:42:43.183552 env[1222]: time="2024-02-09T18:42:43.183534568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb  9 18:42:43.183587 env[1222]: time="2024-02-09T18:42:43.183552127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb  9 18:42:43.183587 env[1222]: time="2024-02-09T18:42:43.183565500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb  9 18:42:43.183587 env[1222]: time="2024-02-09T18:42:43.183577245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb  9 18:42:43.183641 env[1222]: time="2024-02-09T18:42:43.183588215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb  9 18:42:43.183641 env[1222]: time="2024-02-09T18:42:43.183600813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb  9 18:42:43.183641 env[1222]: time="2024-02-09T18:42:43.183611899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb  9 18:42:43.183641 env[1222]: time="2024-02-09T18:42:43.183623295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb  9 18:42:43.183641 env[1222]: time="2024-02-09T18:42:43.183635544Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb  9 18:42:43.183776 env[1222]: time="2024-02-09T18:42:43.183749699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb  9 18:42:43.183816 env[1222]: time="2024-02-09T18:42:43.183782104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb  9 18:42:43.183816 env[1222]: time="2024-02-09T18:42:43.183807649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb  9 18:42:43.183861 env[1222]: time="2024-02-09T18:42:43.183820673Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb  9 18:42:43.183861 env[1222]: time="2024-02-09T18:42:43.183834705Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb  9 18:42:43.183861 env[1222]: time="2024-02-09T18:42:43.183845054Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb  9 18:42:43.183923 env[1222]: time="2024-02-09T18:42:43.183863389Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb  9 18:42:43.183923 env[1222]: time="2024-02-09T18:42:43.183898934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb  9 18:42:43.184153 env[1222]: time="2024-02-09T18:42:43.184101584Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb  9 18:42:43.186479 env[1222]: time="2024-02-09T18:42:43.184157750Z" level=info msg="Connect containerd service"
Feb  9 18:42:43.186479 env[1222]: time="2024-02-09T18:42:43.184195505Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb  9 18:42:43.186479 env[1222]: time="2024-02-09T18:42:43.185028895Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb  9 18:42:43.186479 env[1222]: time="2024-02-09T18:42:43.185391440Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb  9 18:42:43.186479 env[1222]: time="2024-02-09T18:42:43.185427644Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb  9 18:42:43.185581 systemd[1]: Started containerd.service.
Feb  9 18:42:43.186657 env[1222]: time="2024-02-09T18:42:43.186534774Z" level=info msg="containerd successfully booted in 0.039644s"
Feb  9 18:42:43.188120 env[1222]: time="2024-02-09T18:42:43.186707266Z" level=info msg="Start subscribing containerd event"
Feb  9 18:42:43.188120 env[1222]: time="2024-02-09T18:42:43.186862548Z" level=info msg="Start recovering state"
Feb  9 18:42:43.188120 env[1222]: time="2024-02-09T18:42:43.186923521Z" level=info msg="Start event monitor"
Feb  9 18:42:43.188120 env[1222]: time="2024-02-09T18:42:43.186943096Z" level=info msg="Start snapshots syncer"
Feb  9 18:42:43.188120 env[1222]: time="2024-02-09T18:42:43.186952787Z" level=info msg="Start cni network conf syncer for default"
Feb  9 18:42:43.188120 env[1222]: time="2024-02-09T18:42:43.186961005Z" level=info msg="Start streaming server"
Feb  9 18:42:43.202295 tar[1218]: ./vrf
Feb  9 18:42:43.230421 tar[1218]: ./bridge
Feb  9 18:42:43.262927 tar[1218]: ./tuning
Feb  9 18:42:43.289893 tar[1218]: ./firewall
Feb  9 18:42:43.323808 tar[1218]: ./host-device
Feb  9 18:42:43.353874 tar[1218]: ./sbr
Feb  9 18:42:43.381507 tar[1218]: ./loopback
Feb  9 18:42:43.389268 systemd[1]: Finished prepare-critools.service.
Feb  9 18:42:43.408625 tar[1218]: ./dhcp
Feb  9 18:42:43.477422 tar[1218]: ./ptp
Feb  9 18:42:43.504567 tar[1218]: ./ipvlan
Feb  9 18:42:43.530970 tar[1218]: ./bandwidth
Feb  9 18:42:43.566823 systemd[1]: Finished prepare-cni-plugins.service.
Feb  9 18:42:44.072952 systemd-networkd[1103]: eth0: Gained IPv6LL
Feb  9 18:42:45.217166 sshd_keygen[1228]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb  9 18:42:45.233924 systemd[1]: Finished sshd-keygen.service.
Feb  9 18:42:45.236221 systemd[1]: Starting issuegen.service...
Feb  9 18:42:45.240569 systemd[1]: issuegen.service: Deactivated successfully.
Feb  9 18:42:45.240768 systemd[1]: Finished issuegen.service.
Feb  9 18:42:45.242931 systemd[1]: Starting systemd-user-sessions.service...
Feb  9 18:42:45.248231 systemd[1]: Finished systemd-user-sessions.service.
Feb  9 18:42:45.250305 systemd[1]: Started getty@tty1.service.
Feb  9 18:42:45.252196 systemd[1]: Started serial-getty@ttyAMA0.service.
Feb  9 18:42:45.253145 systemd[1]: Reached target getty.target.
Feb  9 18:42:45.253944 systemd[1]: Reached target multi-user.target.
Feb  9 18:42:45.255901 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb  9 18:42:45.261594 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb  9 18:42:45.261891 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb  9 18:42:45.262838 systemd[1]: Startup finished in 5.744s (kernel) + 5.525s (userspace) = 11.270s.
Feb  9 18:42:46.842710 systemd[1]: Created slice system-sshd.slice.
Feb  9 18:42:46.843910 systemd[1]: Started sshd@0-10.0.0.123:22-10.0.0.1:33038.service.
Feb  9 18:42:46.890695 sshd[1293]: Accepted publickey for core from 10.0.0.1 port 33038 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb  9 18:42:46.892594 sshd[1293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 18:42:46.901214 systemd-logind[1204]: New session 1 of user core.
Feb  9 18:42:46.902043 systemd[1]: Created slice user-500.slice.
Feb  9 18:42:46.903005 systemd[1]: Starting user-runtime-dir@500.service...
Feb  9 18:42:46.910871 systemd[1]: Finished user-runtime-dir@500.service.
Feb  9 18:42:46.912067 systemd[1]: Starting user@500.service...
Feb  9 18:42:46.915070 (systemd)[1298]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb  9 18:42:46.973084 systemd[1298]: Queued start job for default target default.target.
Feb  9 18:42:46.973308 systemd[1298]: Reached target paths.target.
Feb  9 18:42:46.973323 systemd[1298]: Reached target sockets.target.
Feb  9 18:42:46.973334 systemd[1298]: Reached target timers.target.
Feb  9 18:42:46.973354 systemd[1298]: Reached target basic.target.
Feb  9 18:42:46.973475 systemd[1]: Started user@500.service.
Feb  9 18:42:46.974143 systemd[1298]: Reached target default.target.
Feb  9 18:42:46.974274 systemd[1298]: Startup finished in 53ms.
Feb  9 18:42:46.974316 systemd[1]: Started session-1.scope.
Feb  9 18:42:47.023502 systemd[1]: Started sshd@1-10.0.0.123:22-10.0.0.1:33050.service.
Feb  9 18:42:47.069321 sshd[1307]: Accepted publickey for core from 10.0.0.1 port 33050 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb  9 18:42:47.070912 sshd[1307]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 18:42:47.074384 systemd-logind[1204]: New session 2 of user core.
Feb  9 18:42:47.075201 systemd[1]: Started session-2.scope.
Feb  9 18:42:47.128717 sshd[1307]: pam_unix(sshd:session): session closed for user core
Feb  9 18:42:47.130892 systemd[1]: Started sshd@2-10.0.0.123:22-10.0.0.1:33064.service.
Feb  9 18:42:47.131616 systemd[1]: sshd@1-10.0.0.123:22-10.0.0.1:33050.service: Deactivated successfully.
Feb  9 18:42:47.132467 systemd-logind[1204]: Session 2 logged out. Waiting for processes to exit.
Feb  9 18:42:47.132510 systemd[1]: session-2.scope: Deactivated successfully.
Feb  9 18:42:47.133142 systemd-logind[1204]: Removed session 2.
Feb  9 18:42:47.167080 sshd[1312]: Accepted publickey for core from 10.0.0.1 port 33064 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb  9 18:42:47.168301 sshd[1312]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 18:42:47.171534 systemd-logind[1204]: New session 3 of user core.
Feb  9 18:42:47.172331 systemd[1]: Started session-3.scope.
Feb  9 18:42:47.220012 sshd[1312]: pam_unix(sshd:session): session closed for user core
Feb  9 18:42:47.222263 systemd[1]: Started sshd@3-10.0.0.123:22-10.0.0.1:33074.service.
Feb  9 18:42:47.222894 systemd[1]: sshd@2-10.0.0.123:22-10.0.0.1:33064.service: Deactivated successfully.
Feb  9 18:42:47.223894 systemd-logind[1204]: Session 3 logged out. Waiting for processes to exit.
Feb  9 18:42:47.224369 systemd[1]: session-3.scope: Deactivated successfully.
Feb  9 18:42:47.224999 systemd-logind[1204]: Removed session 3.
Feb  9 18:42:47.258751 sshd[1319]: Accepted publickey for core from 10.0.0.1 port 33074 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb  9 18:42:47.260348 sshd[1319]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 18:42:47.263588 systemd-logind[1204]: New session 4 of user core.
Feb  9 18:42:47.264985 systemd[1]: Started session-4.scope.
Feb  9 18:42:47.317621 sshd[1319]: pam_unix(sshd:session): session closed for user core
Feb  9 18:42:47.320217 systemd[1]: Started sshd@4-10.0.0.123:22-10.0.0.1:33084.service.
Feb  9 18:42:47.320822 systemd[1]: sshd@3-10.0.0.123:22-10.0.0.1:33074.service: Deactivated successfully.
Feb  9 18:42:47.321807 systemd-logind[1204]: Session 4 logged out. Waiting for processes to exit.
Feb  9 18:42:47.322317 systemd[1]: session-4.scope: Deactivated successfully.
Feb  9 18:42:47.322910 systemd-logind[1204]: Removed session 4.
Feb  9 18:42:47.355239 sshd[1327]: Accepted publickey for core from 10.0.0.1 port 33084 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb  9 18:42:47.356443 sshd[1327]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 18:42:47.359598 systemd-logind[1204]: New session 5 of user core.
Feb  9 18:42:47.361049 systemd[1]: Started session-5.scope.
Feb  9 18:42:47.418459 sudo[1332]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb  9 18:42:47.418660 sudo[1332]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb  9 18:42:47.927921 systemd[1]: Reloading.
Feb  9 18:42:47.971494 /usr/lib/systemd/system-generators/torcx-generator[1362]: time="2024-02-09T18:42:47Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb  9 18:42:47.971530 /usr/lib/systemd/system-generators/torcx-generator[1362]: time="2024-02-09T18:42:47Z" level=info msg="torcx already run"
Feb  9 18:42:48.034767 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb  9 18:42:48.034797 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb  9 18:42:48.051886 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb  9 18:42:48.108263 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb  9 18:42:48.113915 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb  9 18:42:48.114459 systemd[1]: Reached target network-online.target.
Feb  9 18:42:48.115997 systemd[1]: Started kubelet.service.
Feb  9 18:42:48.126680 systemd[1]: Starting coreos-metadata.service...
Feb  9 18:42:48.133476 systemd[1]: coreos-metadata.service: Deactivated successfully.
Feb  9 18:42:48.133945 systemd[1]: Finished coreos-metadata.service.
Feb  9 18:42:48.340067 kubelet[1407]: E0209 18:42:48.338039    1407 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb  9 18:42:48.342209 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb  9 18:42:48.342352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb  9 18:42:48.468329 systemd[1]: Stopped kubelet.service.
Feb  9 18:42:48.480925 systemd[1]: Reloading.
Feb  9 18:42:48.524745 /usr/lib/systemd/system-generators/torcx-generator[1477]: time="2024-02-09T18:42:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb  9 18:42:48.525078 /usr/lib/systemd/system-generators/torcx-generator[1477]: time="2024-02-09T18:42:48Z" level=info msg="torcx already run"
Feb  9 18:42:48.583934 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb  9 18:42:48.584083 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb  9 18:42:48.601045 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb  9 18:42:48.659573 systemd[1]: Started kubelet.service.
Feb  9 18:42:48.698641 kubelet[1521]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb  9 18:42:48.698641 kubelet[1521]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb  9 18:42:48.698999 kubelet[1521]: I0209 18:42:48.698729    1521 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb  9 18:42:48.700147 kubelet[1521]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb  9 18:42:48.700147 kubelet[1521]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb  9 18:42:49.524709 kubelet[1521]: I0209 18:42:49.524668    1521 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb  9 18:42:49.524709 kubelet[1521]: I0209 18:42:49.524698    1521 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb  9 18:42:49.524941 kubelet[1521]: I0209 18:42:49.524914    1521 server.go:836] "Client rotation is on, will bootstrap in background"
Feb  9 18:42:49.528160 kubelet[1521]: I0209 18:42:49.528143    1521 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb  9 18:42:49.530272 kubelet[1521]: W0209 18:42:49.530250    1521 machine.go:65] Cannot read vendor id correctly, set empty.
Feb  9 18:42:49.531869 kubelet[1521]: I0209 18:42:49.531840    1521 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb  9 18:42:49.532328 kubelet[1521]: I0209 18:42:49.532309    1521 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb  9 18:42:49.532405 kubelet[1521]: I0209 18:42:49.532392    1521 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb  9 18:42:49.532501 kubelet[1521]: I0209 18:42:49.532409    1521 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb  9 18:42:49.532501 kubelet[1521]: I0209 18:42:49.532421    1521 container_manager_linux.go:308] "Creating device plugin manager"
Feb  9 18:42:49.532614 kubelet[1521]: I0209 18:42:49.532592    1521 state_mem.go:36] "Initialized new in-memory state store"
Feb  9 18:42:49.536812 kubelet[1521]: I0209 18:42:49.536785    1521 kubelet.go:398] "Attempting to sync node with API server"
Feb  9 18:42:49.537012 kubelet[1521]: I0209 18:42:49.537002    1521 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb  9 18:42:49.537340 kubelet[1521]: I0209 18:42:49.537328    1521 kubelet.go:297] "Adding apiserver pod source"
Feb  9 18:42:49.537811 kubelet[1521]: E0209 18:42:49.537437    1521 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:42:49.537811 kubelet[1521]: E0209 18:42:49.537459    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:42:49.537918 kubelet[1521]: I0209 18:42:49.537904    1521 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb  9 18:42:49.538746 kubelet[1521]: I0209 18:42:49.538729    1521 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb  9 18:42:49.539650 kubelet[1521]: W0209 18:42:49.539632    1521 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb  9 18:42:49.540303 kubelet[1521]: I0209 18:42:49.540284    1521 server.go:1186] "Started kubelet"
Feb  9 18:42:49.541686 kubelet[1521]: I0209 18:42:49.541672    1521 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb  9 18:42:49.542317 kubelet[1521]: E0209 18:42:49.541135    1521 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb  9 18:42:49.542384 kubelet[1521]: E0209 18:42:49.542342    1521 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb  9 18:42:49.542584 kernel: SELinux:  Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb  9 18:42:49.542761 kubelet[1521]: I0209 18:42:49.542670    1521 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb  9 18:42:49.542961 kubelet[1521]: I0209 18:42:49.542943    1521 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb  9 18:42:49.543887 kubelet[1521]: I0209 18:42:49.543868    1521 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb  9 18:42:49.543989 kubelet[1521]: I0209 18:42:49.543282    1521 server.go:451] "Adding debug handlers to kubelet server"
Feb  9 18:42:49.552877 kubelet[1521]: W0209 18:42:49.552839    1521 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb  9 18:42:49.553241 kubelet[1521]: E0209 18:42:49.553219    1521 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb  9 18:42:49.553317 kubelet[1521]: E0209 18:42:49.553047    1521 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b245fed5f2205a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 540264026, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 540264026, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:42:49.553549 kubelet[1521]: W0209 18:42:49.553140    1521 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.123" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb  9 18:42:49.553630 kubelet[1521]: E0209 18:42:49.553619    1521 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.123" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb  9 18:42:49.553676 kubelet[1521]: W0209 18:42:49.553166    1521 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb  9 18:42:49.553726 kubelet[1521]: E0209 18:42:49.553717    1521 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb  9 18:42:49.553796 kubelet[1521]: E0209 18:42:49.553202    1521 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.0.0.123" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb  9 18:42:49.560149 kubelet[1521]: E0209 18:42:49.560036    1521 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b245fed6111e41", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 542295105, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 542295105, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:42:49.575497 kubelet[1521]: I0209 18:42:49.575474    1521 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb  9 18:42:49.575497 kubelet[1521]: I0209 18:42:49.575493    1521 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb  9 18:42:49.575584 kubelet[1521]: I0209 18:42:49.575510    1521 state_mem.go:36] "Initialized new in-memory state store"
Feb  9 18:42:49.576185 kubelet[1521]: E0209 18:42:49.576109    1521 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b245fed8021252", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.123 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 574863442, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 574863442, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:42:49.577093 kubelet[1521]: I0209 18:42:49.577072    1521 policy_none.go:49] "None policy: Start"
Feb  9 18:42:49.577156 kubelet[1521]: E0209 18:42:49.577055    1521 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b245fed8024d07", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.123 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 574878471, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 574878471, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:42:49.577624 kubelet[1521]: I0209 18:42:49.577610    1521 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb  9 18:42:49.577734 kubelet[1521]: I0209 18:42:49.577723    1521 state_mem.go:35] "Initializing new in-memory state store"
Feb  9 18:42:49.577838 kubelet[1521]: E0209 18:42:49.577755    1521 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b245fed8025b82", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.123 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 574882178, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 574882178, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:42:49.582784 kubelet[1521]: I0209 18:42:49.582755    1521 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb  9 18:42:49.583081 kubelet[1521]: I0209 18:42:49.583067    1521 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb  9 18:42:49.584386 kubelet[1521]: E0209 18:42:49.584302    1521 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b245fed885c717", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 583494935, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 583494935, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:42:49.584479 kubelet[1521]: E0209 18:42:49.584467    1521 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.123\" not found"
Feb  9 18:42:49.644139 kubelet[1521]: I0209 18:42:49.644116    1521 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.123"
Feb  9 18:42:49.645159 kubelet[1521]: E0209 18:42:49.645140    1521 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.123"
Feb  9 18:42:49.645594 kubelet[1521]: E0209 18:42:49.645531    1521 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b245fed8021252", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.123 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 574863442, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 644068547, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.123.17b245fed8021252" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:42:49.646432 kubelet[1521]: E0209 18:42:49.646380    1521 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b245fed8024d07", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.123 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 574878471, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 644082352, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.123.17b245fed8024d07" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:42:49.647211 kubelet[1521]: E0209 18:42:49.647153    1521 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b245fed8025b82", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.123 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 574882178, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 644087322, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.123.17b245fed8025b82" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:42:49.684340 kubelet[1521]: I0209 18:42:49.684316    1521 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb  9 18:42:49.702675 kubelet[1521]: I0209 18:42:49.702646    1521 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb  9 18:42:49.703032 kubelet[1521]: I0209 18:42:49.703017    1521 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb  9 18:42:49.703123 kubelet[1521]: I0209 18:42:49.703112    1521 kubelet.go:2113] "Starting kubelet main sync loop"
Feb  9 18:42:49.703216 kubelet[1521]: E0209 18:42:49.703207    1521 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb  9 18:42:49.704698 kubelet[1521]: W0209 18:42:49.704671    1521 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb  9 18:42:49.704698 kubelet[1521]: E0209 18:42:49.704699    1521 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb  9 18:42:49.755699 kubelet[1521]: E0209 18:42:49.755669    1521 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.0.0.123" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb  9 18:42:49.846705 kubelet[1521]: I0209 18:42:49.846613    1521 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.123"
Feb  9 18:42:49.847776 kubelet[1521]: E0209 18:42:49.847752    1521 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.123"
Feb  9 18:42:49.848775 kubelet[1521]: E0209 18:42:49.848677    1521 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b245fed8021252", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.123 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 574863442, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 846571807, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.123.17b245fed8021252" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:42:49.849603 kubelet[1521]: E0209 18:42:49.849517    1521 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b245fed8024d07", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.123 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 574878471, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 846584193, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.123.17b245fed8024d07" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:42:49.942862 kubelet[1521]: E0209 18:42:49.942771    1521 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b245fed8025b82", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.123 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 574882178, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 846587703, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.123.17b245fed8025b82" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:42:50.157113 kubelet[1521]: E0209 18:42:50.157023    1521 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.0.0.123" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb  9 18:42:50.248913 kubelet[1521]: I0209 18:42:50.248875    1521 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.123"
Feb  9 18:42:50.251215 kubelet[1521]: E0209 18:42:50.251194    1521 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.123"
Feb  9 18:42:50.251300 kubelet[1521]: E0209 18:42:50.251195    1521 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b245fed8021252", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.123 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 574863442, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 42, 50, 248830592, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.123.17b245fed8021252" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:42:50.346605 kubelet[1521]: E0209 18:42:50.346518    1521 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b245fed8024d07", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.123 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 574878471, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 42, 50, 248842841, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.123.17b245fed8024d07" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:42:50.538372 kubelet[1521]: E0209 18:42:50.538267    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:42:50.542622 kubelet[1521]: E0209 18:42:50.542531    1521 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b245fed8025b82", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.123 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 574882178, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 42, 50, 248847780, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.123.17b245fed8025b82" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:42:50.548643 kubelet[1521]: W0209 18:42:50.548625    1521 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb  9 18:42:50.548693 kubelet[1521]: E0209 18:42:50.548654    1521 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb  9 18:42:50.850311 kubelet[1521]: W0209 18:42:50.850223    1521 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.123" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb  9 18:42:50.850311 kubelet[1521]: E0209 18:42:50.850256    1521 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.123" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb  9 18:42:50.932103 kubelet[1521]: W0209 18:42:50.932077    1521 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb  9 18:42:50.932103 kubelet[1521]: E0209 18:42:50.932099    1521 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb  9 18:42:50.958101 kubelet[1521]: E0209 18:42:50.958081    1521 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.0.0.123" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb  9 18:42:51.052058 kubelet[1521]: I0209 18:42:51.052028    1521 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.123"
Feb  9 18:42:51.053117 kubelet[1521]: E0209 18:42:51.053082    1521 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.123"
Feb  9 18:42:51.053228 kubelet[1521]: E0209 18:42:51.053074    1521 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b245fed8021252", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.123 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 574863442, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 42, 51, 51991545, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.123.17b245fed8021252" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:42:51.054124 kubelet[1521]: E0209 18:42:51.054069    1521 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b245fed8024d07", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.123 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 574878471, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 42, 51, 52001478, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.123.17b245fed8024d07" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:42:51.098131 kubelet[1521]: W0209 18:42:51.098111    1521 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb  9 18:42:51.098222 kubelet[1521]: E0209 18:42:51.098212    1521 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb  9 18:42:51.142112 kubelet[1521]: E0209 18:42:51.141986    1521 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b245fed8025b82", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.123 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 574882178, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 42, 51, 52004407, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.123.17b245fed8025b82" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:42:51.539350 kubelet[1521]: E0209 18:42:51.539276    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:42:52.540251 kubelet[1521]: E0209 18:42:52.540198    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:42:52.551772 kubelet[1521]: W0209 18:42:52.551735    1521 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb  9 18:42:52.551772 kubelet[1521]: E0209 18:42:52.551766    1521 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb  9 18:42:52.559870 kubelet[1521]: E0209 18:42:52.559841    1521 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.0.0.123" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb  9 18:42:52.654176 kubelet[1521]: I0209 18:42:52.654144    1521 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.123"
Feb  9 18:42:52.657755 kubelet[1521]: E0209 18:42:52.657676    1521 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b245fed8021252", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.123 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 574863442, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 42, 52, 654110958, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.123.17b245fed8021252" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:42:52.658308 kubelet[1521]: E0209 18:42:52.658274    1521 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.123"
Feb  9 18:42:52.658732 kubelet[1521]: E0209 18:42:52.658642    1521 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b245fed8024d07", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.123 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 574878471, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 42, 52, 654116031, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.123.17b245fed8024d07" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:42:52.659728 kubelet[1521]: E0209 18:42:52.659665    1521 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b245fed8025b82", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.123 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 574882178, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 42, 52, 654118448, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.123.17b245fed8025b82" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:42:52.724099 kubelet[1521]: W0209 18:42:52.724076    1521 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.123" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb  9 18:42:52.724239 kubelet[1521]: E0209 18:42:52.724227    1521 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.123" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb  9 18:42:52.960963 kubelet[1521]: W0209 18:42:52.960872    1521 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb  9 18:42:52.960963 kubelet[1521]: E0209 18:42:52.960910    1521 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb  9 18:42:53.540874 kubelet[1521]: E0209 18:42:53.540838    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:42:53.635370 kubelet[1521]: W0209 18:42:53.635341    1521 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb  9 18:42:53.635513 kubelet[1521]: E0209 18:42:53.635502    1521 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb  9 18:42:54.542215 kubelet[1521]: E0209 18:42:54.542180    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:42:55.543568 kubelet[1521]: E0209 18:42:55.543529    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:42:55.761630 kubelet[1521]: E0209 18:42:55.761578    1521 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.0.0.123" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb  9 18:42:55.859764 kubelet[1521]: I0209 18:42:55.859665    1521 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.123"
Feb  9 18:42:55.861177 kubelet[1521]: E0209 18:42:55.861099    1521 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b245fed8021252", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.123 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 574863442, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 42, 55, 859630275, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.123.17b245fed8021252" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:42:55.861818 kubelet[1521]: E0209 18:42:55.861798    1521 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.123"
Feb  9 18:42:55.862290 kubelet[1521]: E0209 18:42:55.862231    1521 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b245fed8024d07", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.123 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 574878471, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 42, 55, 859635681, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.123.17b245fed8024d07" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:42:55.863201 kubelet[1521]: E0209 18:42:55.863146    1521 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b245fed8025b82", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.123 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 42, 49, 574882178, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 42, 55, 859638344, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.123.17b245fed8025b82" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:42:56.272000 kubelet[1521]: W0209 18:42:56.271902    1521 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb  9 18:42:56.272000 kubelet[1521]: E0209 18:42:56.271935    1521 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb  9 18:42:56.545093 kubelet[1521]: E0209 18:42:56.545002    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:42:57.546438 kubelet[1521]: E0209 18:42:57.546402    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:42:57.741583 kubelet[1521]: W0209 18:42:57.741530    1521 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.123" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb  9 18:42:57.741583 kubelet[1521]: E0209 18:42:57.741567    1521 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.123" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb  9 18:42:58.547617 kubelet[1521]: E0209 18:42:58.547547    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:42:58.720630 kubelet[1521]: W0209 18:42:58.720603    1521 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb  9 18:42:58.720770 kubelet[1521]: E0209 18:42:58.720759    1521 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb  9 18:42:59.353056 kubelet[1521]: W0209 18:42:59.353014    1521 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb  9 18:42:59.353056 kubelet[1521]: E0209 18:42:59.353051    1521 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb  9 18:42:59.526644 kubelet[1521]: I0209 18:42:59.526603    1521 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb  9 18:42:59.548249 kubelet[1521]: E0209 18:42:59.548212    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:42:59.585026 kubelet[1521]: E0209 18:42:59.584981    1521 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.123\" not found"
Feb  9 18:42:59.895659 kubelet[1521]: E0209 18:42:59.895613    1521 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.123" not found
Feb  9 18:43:00.549252 kubelet[1521]: E0209 18:43:00.549198    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:00.963228 kubelet[1521]: E0209 18:43:00.960136    1521 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.123" not found
Feb  9 18:43:01.549685 kubelet[1521]: E0209 18:43:01.549629    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:02.169517 kubelet[1521]: E0209 18:43:02.169461    1521 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.123\" not found" node="10.0.0.123"
Feb  9 18:43:02.263437 kubelet[1521]: I0209 18:43:02.263401    1521 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.123"
Feb  9 18:43:02.358990 kubelet[1521]: I0209 18:43:02.358901    1521 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.123"
Feb  9 18:43:02.370532 kubelet[1521]: E0209 18:43:02.370463    1521 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.123\" not found"
Feb  9 18:43:02.471106 kubelet[1521]: E0209 18:43:02.470968    1521 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.123\" not found"
Feb  9 18:43:02.550779 kubelet[1521]: E0209 18:43:02.550724    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:02.571747 kubelet[1521]: E0209 18:43:02.571696    1521 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.123\" not found"
Feb  9 18:43:02.672920 kubelet[1521]: E0209 18:43:02.672886    1521 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.123\" not found"
Feb  9 18:43:02.674594 sudo[1332]: pam_unix(sudo:session): session closed for user root
Feb  9 18:43:02.676740 sshd[1327]: pam_unix(sshd:session): session closed for user core
Feb  9 18:43:02.679000 systemd[1]: sshd@4-10.0.0.123:22-10.0.0.1:33084.service: Deactivated successfully.
Feb  9 18:43:02.680552 systemd[1]: session-5.scope: Deactivated successfully.
Feb  9 18:43:02.681287 systemd-logind[1204]: Session 5 logged out. Waiting for processes to exit.
Feb  9 18:43:02.684906 systemd-logind[1204]: Removed session 5.
Feb  9 18:43:02.773199 kubelet[1521]: E0209 18:43:02.773094    1521 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.123\" not found"
Feb  9 18:43:02.873628 kubelet[1521]: E0209 18:43:02.873554    1521 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.123\" not found"
Feb  9 18:43:02.974262 kubelet[1521]: E0209 18:43:02.974187    1521 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.123\" not found"
Feb  9 18:43:03.075161 kubelet[1521]: E0209 18:43:03.075106    1521 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.123\" not found"
Feb  9 18:43:03.176311 kubelet[1521]: E0209 18:43:03.176232    1521 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.123\" not found"
Feb  9 18:43:03.277250 kubelet[1521]: E0209 18:43:03.277196    1521 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.123\" not found"
Feb  9 18:43:03.378102 kubelet[1521]: E0209 18:43:03.377948    1521 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.123\" not found"
Feb  9 18:43:03.478580 kubelet[1521]: E0209 18:43:03.478502    1521 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.123\" not found"
Feb  9 18:43:03.551600 kubelet[1521]: E0209 18:43:03.551542    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:03.580887 kubelet[1521]: E0209 18:43:03.580825    1521 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.123\" not found"
Feb  9 18:43:03.681776 kubelet[1521]: E0209 18:43:03.681664    1521 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.123\" not found"
Feb  9 18:43:03.782429 kubelet[1521]: E0209 18:43:03.782381    1521 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.123\" not found"
Feb  9 18:43:03.882901 kubelet[1521]: E0209 18:43:03.882836    1521 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.123\" not found"
Feb  9 18:43:03.983398 kubelet[1521]: E0209 18:43:03.983263    1521 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.123\" not found"
Feb  9 18:43:04.083577 kubelet[1521]: E0209 18:43:04.083529    1521 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.123\" not found"
Feb  9 18:43:04.184163 kubelet[1521]: E0209 18:43:04.184104    1521 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.123\" not found"
Feb  9 18:43:04.284763 kubelet[1521]: E0209 18:43:04.284641    1521 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.123\" not found"
Feb  9 18:43:04.385259 kubelet[1521]: E0209 18:43:04.385212    1521 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.123\" not found"
Feb  9 18:43:04.485761 kubelet[1521]: E0209 18:43:04.485702    1521 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.123\" not found"
Feb  9 18:43:04.552626 kubelet[1521]: E0209 18:43:04.552508    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:04.585996 kubelet[1521]: E0209 18:43:04.585749    1521 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.123\" not found"
Feb  9 18:43:04.686805 kubelet[1521]: E0209 18:43:04.686752    1521 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.123\" not found"
Feb  9 18:43:04.787448 kubelet[1521]: E0209 18:43:04.787400    1521 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.123\" not found"
Feb  9 18:43:04.888593 kubelet[1521]: E0209 18:43:04.888408    1521 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.123\" not found"
Feb  9 18:43:04.988895 kubelet[1521]: E0209 18:43:04.988825    1521 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.123\" not found"
Feb  9 18:43:05.090302 kubelet[1521]: I0209 18:43:05.090256    1521 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb  9 18:43:05.090561 env[1222]: time="2024-02-09T18:43:05.090500045Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb  9 18:43:05.090823 kubelet[1521]: I0209 18:43:05.090653    1521 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb  9 18:43:05.547160 kubelet[1521]: I0209 18:43:05.547078    1521 apiserver.go:52] "Watching apiserver"
Feb  9 18:43:05.550324 kubelet[1521]: I0209 18:43:05.550280    1521 topology_manager.go:210] "Topology Admit Handler"
Feb  9 18:43:05.550396 kubelet[1521]: I0209 18:43:05.550347    1521 topology_manager.go:210] "Topology Admit Handler"
Feb  9 18:43:05.555553 kubelet[1521]: E0209 18:43:05.555512    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:05.645572 kubelet[1521]: I0209 18:43:05.645511    1521 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb  9 18:43:05.719990 kubelet[1521]: I0209 18:43:05.719938    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-cilium-run\") pod \"cilium-t2wgb\" (UID: \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\") " pod="kube-system/cilium-t2wgb"
Feb  9 18:43:05.719990 kubelet[1521]: I0209 18:43:05.719975    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-cilium-cgroup\") pod \"cilium-t2wgb\" (UID: \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\") " pod="kube-system/cilium-t2wgb"
Feb  9 18:43:05.719990 kubelet[1521]: I0209 18:43:05.720001    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-lib-modules\") pod \"cilium-t2wgb\" (UID: \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\") " pod="kube-system/cilium-t2wgb"
Feb  9 18:43:05.720180 kubelet[1521]: I0209 18:43:05.720122    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-xtables-lock\") pod \"cilium-t2wgb\" (UID: \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\") " pod="kube-system/cilium-t2wgb"
Feb  9 18:43:05.720215 kubelet[1521]: I0209 18:43:05.720192    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a7cd3c62-d2a9-42dc-940e-44be03fd2442-cilium-config-path\") pod \"cilium-t2wgb\" (UID: \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\") " pod="kube-system/cilium-t2wgb"
Feb  9 18:43:05.720254 kubelet[1521]: I0209 18:43:05.720243    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-host-proc-sys-net\") pod \"cilium-t2wgb\" (UID: \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\") " pod="kube-system/cilium-t2wgb"
Feb  9 18:43:05.720278 kubelet[1521]: I0209 18:43:05.720270    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f981298-bd22-4283-9311-4f5fb12a3030-lib-modules\") pod \"kube-proxy-hf8j7\" (UID: \"2f981298-bd22-4283-9311-4f5fb12a3030\") " pod="kube-system/kube-proxy-hf8j7"
Feb  9 18:43:05.720316 kubelet[1521]: I0209 18:43:05.720308    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzbfw\" (UniqueName: \"kubernetes.io/projected/2f981298-bd22-4283-9311-4f5fb12a3030-kube-api-access-bzbfw\") pod \"kube-proxy-hf8j7\" (UID: \"2f981298-bd22-4283-9311-4f5fb12a3030\") " pod="kube-system/kube-proxy-hf8j7"
Feb  9 18:43:05.720341 kubelet[1521]: I0209 18:43:05.720330    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a7cd3c62-d2a9-42dc-940e-44be03fd2442-hubble-tls\") pod \"cilium-t2wgb\" (UID: \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\") " pod="kube-system/cilium-t2wgb"
Feb  9 18:43:05.720405 kubelet[1521]: I0209 18:43:05.720351    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk299\" (UniqueName: \"kubernetes.io/projected/a7cd3c62-d2a9-42dc-940e-44be03fd2442-kube-api-access-wk299\") pod \"cilium-t2wgb\" (UID: \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\") " pod="kube-system/cilium-t2wgb"
Feb  9 18:43:05.720433 kubelet[1521]: I0209 18:43:05.720423    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-cni-path\") pod \"cilium-t2wgb\" (UID: \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\") " pod="kube-system/cilium-t2wgb"
Feb  9 18:43:05.720466 kubelet[1521]: I0209 18:43:05.720457    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a7cd3c62-d2a9-42dc-940e-44be03fd2442-clustermesh-secrets\") pod \"cilium-t2wgb\" (UID: \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\") " pod="kube-system/cilium-t2wgb"
Feb  9 18:43:05.720502 kubelet[1521]: I0209 18:43:05.720493    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-bpf-maps\") pod \"cilium-t2wgb\" (UID: \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\") " pod="kube-system/cilium-t2wgb"
Feb  9 18:43:05.720527 kubelet[1521]: I0209 18:43:05.720520    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-hostproc\") pod \"cilium-t2wgb\" (UID: \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\") " pod="kube-system/cilium-t2wgb"
Feb  9 18:43:05.720550 kubelet[1521]: I0209 18:43:05.720545    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f981298-bd22-4283-9311-4f5fb12a3030-xtables-lock\") pod \"kube-proxy-hf8j7\" (UID: \"2f981298-bd22-4283-9311-4f5fb12a3030\") " pod="kube-system/kube-proxy-hf8j7"
Feb  9 18:43:05.720572 kubelet[1521]: I0209 18:43:05.720567    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-host-proc-sys-kernel\") pod \"cilium-t2wgb\" (UID: \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\") " pod="kube-system/cilium-t2wgb"
Feb  9 18:43:05.720636 kubelet[1521]: I0209 18:43:05.720607    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2f981298-bd22-4283-9311-4f5fb12a3030-kube-proxy\") pod \"kube-proxy-hf8j7\" (UID: \"2f981298-bd22-4283-9311-4f5fb12a3030\") " pod="kube-system/kube-proxy-hf8j7"
Feb  9 18:43:05.720667 kubelet[1521]: I0209 18:43:05.720648    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-etc-cni-netd\") pod \"cilium-t2wgb\" (UID: \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\") " pod="kube-system/cilium-t2wgb"
Feb  9 18:43:05.720705 kubelet[1521]: I0209 18:43:05.720696    1521 reconciler.go:41] "Reconciler: start to sync state"
Feb  9 18:43:05.854540 kubelet[1521]: E0209 18:43:05.854481    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:43:05.857980 env[1222]: time="2024-02-09T18:43:05.855481067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hf8j7,Uid:2f981298-bd22-4283-9311-4f5fb12a3030,Namespace:kube-system,Attempt:0,}"
Feb  9 18:43:06.154596 kubelet[1521]: E0209 18:43:06.154489    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:43:06.155085 env[1222]: time="2024-02-09T18:43:06.155044592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t2wgb,Uid:a7cd3c62-d2a9-42dc-940e-44be03fd2442,Namespace:kube-system,Attempt:0,}"
Feb  9 18:43:06.357037 env[1222]: time="2024-02-09T18:43:06.356997659Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:43:06.358106 env[1222]: time="2024-02-09T18:43:06.358078786Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:43:06.359923 env[1222]: time="2024-02-09T18:43:06.359888506Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:43:06.361337 env[1222]: time="2024-02-09T18:43:06.361296483Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:43:06.361998 env[1222]: time="2024-02-09T18:43:06.361961647Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:43:06.363812 env[1222]: time="2024-02-09T18:43:06.363776920Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:43:06.365199 env[1222]: time="2024-02-09T18:43:06.365173833Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:43:06.365964 env[1222]: time="2024-02-09T18:43:06.365933381Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:43:06.393659 env[1222]: time="2024-02-09T18:43:06.393597358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 18:43:06.393659 env[1222]: time="2024-02-09T18:43:06.393636063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 18:43:06.393808 env[1222]: time="2024-02-09T18:43:06.393646688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 18:43:06.393984 env[1222]: time="2024-02-09T18:43:06.393930200Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/53b5485c552c1b29c24cc960a84348e82691c935ae40ee1e7d55e5e6e4c52fc1 pid=1624 runtime=io.containerd.runc.v2
Feb  9 18:43:06.394056 env[1222]: time="2024-02-09T18:43:06.393618848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 18:43:06.394056 env[1222]: time="2024-02-09T18:43:06.393664462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 18:43:06.394056 env[1222]: time="2024-02-09T18:43:06.393679401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 18:43:06.394056 env[1222]: time="2024-02-09T18:43:06.393830104Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a2f2cf55e45019f288363e6028d546151319a46fec3c6e0f178bac065027b86 pid=1623 runtime=io.containerd.runc.v2
Feb  9 18:43:06.463943 env[1222]: time="2024-02-09T18:43:06.463844319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t2wgb,Uid:a7cd3c62-d2a9-42dc-940e-44be03fd2442,Namespace:kube-system,Attempt:0,} returns sandbox id \"53b5485c552c1b29c24cc960a84348e82691c935ae40ee1e7d55e5e6e4c52fc1\""
Feb  9 18:43:06.466822 kubelet[1521]: E0209 18:43:06.466643    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:43:06.469142 env[1222]: time="2024-02-09T18:43:06.469092659Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb  9 18:43:06.469385 env[1222]: time="2024-02-09T18:43:06.469338426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hf8j7,Uid:2f981298-bd22-4283-9311-4f5fb12a3030,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a2f2cf55e45019f288363e6028d546151319a46fec3c6e0f178bac065027b86\""
Feb  9 18:43:06.469919 kubelet[1521]: E0209 18:43:06.469902    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:43:06.556565 kubelet[1521]: E0209 18:43:06.556502    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:06.827634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2360225344.mount: Deactivated successfully.
Feb  9 18:43:07.556834 kubelet[1521]: E0209 18:43:07.556769    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:08.557668 kubelet[1521]: E0209 18:43:08.557610    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:09.538455 kubelet[1521]: E0209 18:43:09.538411    1521 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:09.557783 kubelet[1521]: E0209 18:43:09.557746    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:09.927624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2833068519.mount: Deactivated successfully.
Feb  9 18:43:10.558204 kubelet[1521]: E0209 18:43:10.558155    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:11.559224 kubelet[1521]: E0209 18:43:11.559173    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:12.169650 env[1222]: time="2024-02-09T18:43:12.169602820Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:43:12.171359 env[1222]: time="2024-02-09T18:43:12.171319354Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:43:12.172927 env[1222]: time="2024-02-09T18:43:12.172897057Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:43:12.173466 env[1222]: time="2024-02-09T18:43:12.173431633Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Feb  9 18:43:12.174404 env[1222]: time="2024-02-09T18:43:12.174381861Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\""
Feb  9 18:43:12.175577 env[1222]: time="2024-02-09T18:43:12.175549109Z" level=info msg="CreateContainer within sandbox \"53b5485c552c1b29c24cc960a84348e82691c935ae40ee1e7d55e5e6e4c52fc1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb  9 18:43:12.186496 env[1222]: time="2024-02-09T18:43:12.186459240Z" level=info msg="CreateContainer within sandbox \"53b5485c552c1b29c24cc960a84348e82691c935ae40ee1e7d55e5e6e4c52fc1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1063ac6b2abedfa80036ebcc90232301bd1b5743f19e138b75d2edd89adb8dce\""
Feb  9 18:43:12.187089 env[1222]: time="2024-02-09T18:43:12.187063011Z" level=info msg="StartContainer for \"1063ac6b2abedfa80036ebcc90232301bd1b5743f19e138b75d2edd89adb8dce\""
Feb  9 18:43:12.245923 env[1222]: time="2024-02-09T18:43:12.245877720Z" level=info msg="StartContainer for \"1063ac6b2abedfa80036ebcc90232301bd1b5743f19e138b75d2edd89adb8dce\" returns successfully"
Feb  9 18:43:12.413371 env[1222]: time="2024-02-09T18:43:12.413328560Z" level=info msg="shim disconnected" id=1063ac6b2abedfa80036ebcc90232301bd1b5743f19e138b75d2edd89adb8dce
Feb  9 18:43:12.413686 env[1222]: time="2024-02-09T18:43:12.413666383Z" level=warning msg="cleaning up after shim disconnected" id=1063ac6b2abedfa80036ebcc90232301bd1b5743f19e138b75d2edd89adb8dce namespace=k8s.io
Feb  9 18:43:12.413806 env[1222]: time="2024-02-09T18:43:12.413780909Z" level=info msg="cleaning up dead shim"
Feb  9 18:43:12.420370 env[1222]: time="2024-02-09T18:43:12.420264172Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:43:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1740 runtime=io.containerd.runc.v2\n"
Feb  9 18:43:12.559337 kubelet[1521]: E0209 18:43:12.559285    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:12.742049 kubelet[1521]: E0209 18:43:12.741894    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:43:12.743978 env[1222]: time="2024-02-09T18:43:12.743938926Z" level=info msg="CreateContainer within sandbox \"53b5485c552c1b29c24cc960a84348e82691c935ae40ee1e7d55e5e6e4c52fc1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb  9 18:43:12.753343 env[1222]: time="2024-02-09T18:43:12.753290342Z" level=info msg="CreateContainer within sandbox \"53b5485c552c1b29c24cc960a84348e82691c935ae40ee1e7d55e5e6e4c52fc1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bca7c328f9d04a619273ca39b207bca6eebcadcb68778f8d660fa807ffb80ec3\""
Feb  9 18:43:12.754082 env[1222]: time="2024-02-09T18:43:12.754047454Z" level=info msg="StartContainer for \"bca7c328f9d04a619273ca39b207bca6eebcadcb68778f8d660fa807ffb80ec3\""
Feb  9 18:43:12.804402 env[1222]: time="2024-02-09T18:43:12.804357122Z" level=info msg="StartContainer for \"bca7c328f9d04a619273ca39b207bca6eebcadcb68778f8d660fa807ffb80ec3\" returns successfully"
Feb  9 18:43:12.819331 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb  9 18:43:12.819598 systemd[1]: Stopped systemd-sysctl.service.
Feb  9 18:43:12.819770 systemd[1]: Stopping systemd-sysctl.service...
Feb  9 18:43:12.821504 systemd[1]: Starting systemd-sysctl.service...
Feb  9 18:43:12.831701 systemd[1]: Finished systemd-sysctl.service.
Feb  9 18:43:12.850539 env[1222]: time="2024-02-09T18:43:12.850497516Z" level=info msg="shim disconnected" id=bca7c328f9d04a619273ca39b207bca6eebcadcb68778f8d660fa807ffb80ec3
Feb  9 18:43:12.850825 env[1222]: time="2024-02-09T18:43:12.850804039Z" level=warning msg="cleaning up after shim disconnected" id=bca7c328f9d04a619273ca39b207bca6eebcadcb68778f8d660fa807ffb80ec3 namespace=k8s.io
Feb  9 18:43:12.850910 env[1222]: time="2024-02-09T18:43:12.850894301Z" level=info msg="cleaning up dead shim"
Feb  9 18:43:12.857911 env[1222]: time="2024-02-09T18:43:12.857876682Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:43:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1805 runtime=io.containerd.runc.v2\n"
Feb  9 18:43:13.182518 systemd[1]: run-containerd-runc-k8s.io-1063ac6b2abedfa80036ebcc90232301bd1b5743f19e138b75d2edd89adb8dce-runc.x9ZUVR.mount: Deactivated successfully.
Feb  9 18:43:13.182670 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1063ac6b2abedfa80036ebcc90232301bd1b5743f19e138b75d2edd89adb8dce-rootfs.mount: Deactivated successfully.
Feb  9 18:43:13.320026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1430716579.mount: Deactivated successfully.
Feb  9 18:43:13.560441 kubelet[1521]: E0209 18:43:13.560338    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:13.661223 env[1222]: time="2024-02-09T18:43:13.661176121Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:43:13.662828 env[1222]: time="2024-02-09T18:43:13.662774580Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:43:13.664419 env[1222]: time="2024-02-09T18:43:13.664391308Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:43:13.666057 env[1222]: time="2024-02-09T18:43:13.666022589Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:43:13.666584 env[1222]: time="2024-02-09T18:43:13.666545374Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\""
Feb  9 18:43:13.668601 env[1222]: time="2024-02-09T18:43:13.668569233Z" level=info msg="CreateContainer within sandbox \"0a2f2cf55e45019f288363e6028d546151319a46fec3c6e0f178bac065027b86\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb  9 18:43:13.681507 env[1222]: time="2024-02-09T18:43:13.681443856Z" level=info msg="CreateContainer within sandbox \"0a2f2cf55e45019f288363e6028d546151319a46fec3c6e0f178bac065027b86\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"21149e85cbc9af1f8b5e35f18e2d22cd56be4655dd11dd727fbf6d6816cfcff0\""
Feb  9 18:43:13.682062 env[1222]: time="2024-02-09T18:43:13.682012736Z" level=info msg="StartContainer for \"21149e85cbc9af1f8b5e35f18e2d22cd56be4655dd11dd727fbf6d6816cfcff0\""
Feb  9 18:43:13.744822 env[1222]: time="2024-02-09T18:43:13.742817542Z" level=info msg="StartContainer for \"21149e85cbc9af1f8b5e35f18e2d22cd56be4655dd11dd727fbf6d6816cfcff0\" returns successfully"
Feb  9 18:43:13.745154 kubelet[1521]: E0209 18:43:13.745128    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:43:13.747081 env[1222]: time="2024-02-09T18:43:13.747044159Z" level=info msg="CreateContainer within sandbox \"53b5485c552c1b29c24cc960a84348e82691c935ae40ee1e7d55e5e6e4c52fc1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb  9 18:43:13.747473 kubelet[1521]: E0209 18:43:13.747432    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:43:13.769022 env[1222]: time="2024-02-09T18:43:13.768975757Z" level=info msg="CreateContainer within sandbox \"53b5485c552c1b29c24cc960a84348e82691c935ae40ee1e7d55e5e6e4c52fc1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b07ac15840e8880c634ed67191ba77177b7099ad238b136ff36977b354e3567e\""
Feb  9 18:43:13.769737 kubelet[1521]: I0209 18:43:13.769662    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-hf8j7" podStartSLOduration=-9.223372025085163e+09 pod.CreationTimestamp="2024-02-09 18:43:02 +0000 UTC" firstStartedPulling="2024-02-09 18:43:06.470221437 +0000 UTC m=+17.807568674" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:43:13.769607801 +0000 UTC m=+25.106955038" watchObservedRunningTime="2024-02-09 18:43:13.769613478 +0000 UTC m=+25.106960715"
Feb  9 18:43:13.770219 env[1222]: time="2024-02-09T18:43:13.770188354Z" level=info msg="StartContainer for \"b07ac15840e8880c634ed67191ba77177b7099ad238b136ff36977b354e3567e\""
Feb  9 18:43:13.839798 env[1222]: time="2024-02-09T18:43:13.839750624Z" level=info msg="StartContainer for \"b07ac15840e8880c634ed67191ba77177b7099ad238b136ff36977b354e3567e\" returns successfully"
Feb  9 18:43:13.950616 env[1222]: time="2024-02-09T18:43:13.950573876Z" level=info msg="shim disconnected" id=b07ac15840e8880c634ed67191ba77177b7099ad238b136ff36977b354e3567e
Feb  9 18:43:13.951168 env[1222]: time="2024-02-09T18:43:13.951135839Z" level=warning msg="cleaning up after shim disconnected" id=b07ac15840e8880c634ed67191ba77177b7099ad238b136ff36977b354e3567e namespace=k8s.io
Feb  9 18:43:13.951249 env[1222]: time="2024-02-09T18:43:13.951235543Z" level=info msg="cleaning up dead shim"
Feb  9 18:43:13.959877 env[1222]: time="2024-02-09T18:43:13.959850767Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:43:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1929 runtime=io.containerd.runc.v2\n"
Feb  9 18:43:14.560560 kubelet[1521]: E0209 18:43:14.560509    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:14.750057 kubelet[1521]: E0209 18:43:14.750023    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:43:14.750201 kubelet[1521]: E0209 18:43:14.750088    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:43:14.751769 env[1222]: time="2024-02-09T18:43:14.751730865Z" level=info msg="CreateContainer within sandbox \"53b5485c552c1b29c24cc960a84348e82691c935ae40ee1e7d55e5e6e4c52fc1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb  9 18:43:14.763889 env[1222]: time="2024-02-09T18:43:14.763844890Z" level=info msg="CreateContainer within sandbox \"53b5485c552c1b29c24cc960a84348e82691c935ae40ee1e7d55e5e6e4c52fc1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a5a3a2ed92c661aba3a0ffeb6e419885e44b9fa7068756fa41ca62b8ef76b4c1\""
Feb  9 18:43:14.765521 env[1222]: time="2024-02-09T18:43:14.765496636Z" level=info msg="StartContainer for \"a5a3a2ed92c661aba3a0ffeb6e419885e44b9fa7068756fa41ca62b8ef76b4c1\""
Feb  9 18:43:14.824806 env[1222]: time="2024-02-09T18:43:14.824752852Z" level=info msg="StartContainer for \"a5a3a2ed92c661aba3a0ffeb6e419885e44b9fa7068756fa41ca62b8ef76b4c1\" returns successfully"
Feb  9 18:43:14.840151 env[1222]: time="2024-02-09T18:43:14.840107359Z" level=info msg="shim disconnected" id=a5a3a2ed92c661aba3a0ffeb6e419885e44b9fa7068756fa41ca62b8ef76b4c1
Feb  9 18:43:14.840368 env[1222]: time="2024-02-09T18:43:14.840338325Z" level=warning msg="cleaning up after shim disconnected" id=a5a3a2ed92c661aba3a0ffeb6e419885e44b9fa7068756fa41ca62b8ef76b4c1 namespace=k8s.io
Feb  9 18:43:14.840430 env[1222]: time="2024-02-09T18:43:14.840416687Z" level=info msg="cleaning up dead shim"
Feb  9 18:43:14.848275 env[1222]: time="2024-02-09T18:43:14.848238949Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:43:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2087 runtime=io.containerd.runc.v2\n"
Feb  9 18:43:15.181886 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5a3a2ed92c661aba3a0ffeb6e419885e44b9fa7068756fa41ca62b8ef76b4c1-rootfs.mount: Deactivated successfully.
Feb  9 18:43:15.561569 kubelet[1521]: E0209 18:43:15.561447    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:15.753941 kubelet[1521]: E0209 18:43:15.753919    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:43:15.760225 env[1222]: time="2024-02-09T18:43:15.760177844Z" level=info msg="CreateContainer within sandbox \"53b5485c552c1b29c24cc960a84348e82691c935ae40ee1e7d55e5e6e4c52fc1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb  9 18:43:15.769192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2030517711.mount: Deactivated successfully.
Feb  9 18:43:15.772889 env[1222]: time="2024-02-09T18:43:15.772768946Z" level=info msg="CreateContainer within sandbox \"53b5485c552c1b29c24cc960a84348e82691c935ae40ee1e7d55e5e6e4c52fc1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"876b3e0c46db7596f3f3fc4a783ed437170c05365bb27b0caefdd25039e75a0c\""
Feb  9 18:43:15.773398 env[1222]: time="2024-02-09T18:43:15.773367772Z" level=info msg="StartContainer for \"876b3e0c46db7596f3f3fc4a783ed437170c05365bb27b0caefdd25039e75a0c\""
Feb  9 18:43:15.829823 env[1222]: time="2024-02-09T18:43:15.829737431Z" level=info msg="StartContainer for \"876b3e0c46db7596f3f3fc4a783ed437170c05365bb27b0caefdd25039e75a0c\" returns successfully"
Feb  9 18:43:15.972716 kubelet[1521]: I0209 18:43:15.972680    1521 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb  9 18:43:16.256824 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Feb  9 18:43:16.491824 kernel: Initializing XFRM netlink socket
Feb  9 18:43:16.494814 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Feb  9 18:43:16.562412 kubelet[1521]: E0209 18:43:16.562356    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:16.761725 kubelet[1521]: E0209 18:43:16.761505    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:43:16.776007 kubelet[1521]: I0209 18:43:16.775969    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-t2wgb" podStartSLOduration=-9.22337202207885e+09 pod.CreationTimestamp="2024-02-09 18:43:02 +0000 UTC" firstStartedPulling="2024-02-09 18:43:06.467180486 +0000 UTC m=+17.804527683" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:43:16.775242491 +0000 UTC m=+28.112589728" watchObservedRunningTime="2024-02-09 18:43:16.775926137 +0000 UTC m=+28.113273374"
Feb  9 18:43:17.563323 kubelet[1521]: E0209 18:43:17.563272    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:17.763004 kubelet[1521]: E0209 18:43:17.762969    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:43:18.105600 systemd-networkd[1103]: cilium_host: Link UP
Feb  9 18:43:18.105699 systemd-networkd[1103]: cilium_net: Link UP
Feb  9 18:43:18.105702 systemd-networkd[1103]: cilium_net: Gained carrier
Feb  9 18:43:18.105825 systemd-networkd[1103]: cilium_host: Gained carrier
Feb  9 18:43:18.107921 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb  9 18:43:18.108073 systemd-networkd[1103]: cilium_host: Gained IPv6LL
Feb  9 18:43:18.178889 systemd-networkd[1103]: cilium_vxlan: Link UP
Feb  9 18:43:18.178896 systemd-networkd[1103]: cilium_vxlan: Gained carrier
Feb  9 18:43:18.464822 kernel: NET: Registered PF_ALG protocol family
Feb  9 18:43:18.513103 systemd-networkd[1103]: cilium_net: Gained IPv6LL
Feb  9 18:43:18.563561 kubelet[1521]: E0209 18:43:18.563481    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:18.764805 kubelet[1521]: E0209 18:43:18.764334    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:43:19.001844 systemd-networkd[1103]: lxc_health: Link UP
Feb  9 18:43:19.015162 systemd-networkd[1103]: lxc_health: Gained carrier
Feb  9 18:43:19.016038 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb  9 18:43:19.242120 kubelet[1521]: I0209 18:43:19.242079    1521 topology_manager.go:210] "Topology Admit Handler"
Feb  9 18:43:19.394261 kubelet[1521]: I0209 18:43:19.394219    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bg5x\" (UniqueName: \"kubernetes.io/projected/9cf8fa6b-7c0f-4997-a1e8-cfb2d84179ca-kube-api-access-5bg5x\") pod \"nginx-deployment-8ffc5cf85-j55sr\" (UID: \"9cf8fa6b-7c0f-4997-a1e8-cfb2d84179ca\") " pod="default/nginx-deployment-8ffc5cf85-j55sr"
Feb  9 18:43:19.545492 env[1222]: time="2024-02-09T18:43:19.545279417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-j55sr,Uid:9cf8fa6b-7c0f-4997-a1e8-cfb2d84179ca,Namespace:default,Attempt:0,}"
Feb  9 18:43:19.563963 kubelet[1521]: E0209 18:43:19.563935    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:19.580168 systemd-networkd[1103]: lxc04650797510c: Link UP
Feb  9 18:43:19.590859 kernel: eth0: renamed from tmp2a301
Feb  9 18:43:19.598313 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb  9 18:43:19.598364 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc04650797510c: link becomes ready
Feb  9 18:43:19.598393 systemd-networkd[1103]: lxc04650797510c: Gained carrier
Feb  9 18:43:20.156666 kubelet[1521]: E0209 18:43:20.156640    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:43:20.169147 systemd-networkd[1103]: cilium_vxlan: Gained IPv6LL
Feb  9 18:43:20.233103 systemd-networkd[1103]: lxc_health: Gained IPv6LL
Feb  9 18:43:20.565114 kubelet[1521]: E0209 18:43:20.564993    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:21.384385 kubelet[1521]: I0209 18:43:21.384344    1521 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Feb  9 18:43:21.385111 kubelet[1521]: E0209 18:43:21.385080    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:43:21.565931 kubelet[1521]: E0209 18:43:21.565895    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:21.577240 systemd-networkd[1103]: lxc04650797510c: Gained IPv6LL
Feb  9 18:43:21.768227 kubelet[1521]: E0209 18:43:21.768137    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:43:22.566382 kubelet[1521]: E0209 18:43:22.566340    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:23.070242 env[1222]: time="2024-02-09T18:43:23.070163806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 18:43:23.070242 env[1222]: time="2024-02-09T18:43:23.070220568Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 18:43:23.070242 env[1222]: time="2024-02-09T18:43:23.070231649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 18:43:23.070748 env[1222]: time="2024-02-09T18:43:23.070709231Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a30173d21a190507fc3805dfe6eb9a2adb072ac6877fa2aff62c42eaa138eff pid=2614 runtime=io.containerd.runc.v2
Feb  9 18:43:23.084562 systemd[1]: run-containerd-runc-k8s.io-2a30173d21a190507fc3805dfe6eb9a2adb072ac6877fa2aff62c42eaa138eff-runc.jvimJd.mount: Deactivated successfully.
Feb  9 18:43:23.127104 systemd-resolved[1158]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb  9 18:43:23.145578 env[1222]: time="2024-02-09T18:43:23.145537809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-j55sr,Uid:9cf8fa6b-7c0f-4997-a1e8-cfb2d84179ca,Namespace:default,Attempt:0,} returns sandbox id \"2a30173d21a190507fc3805dfe6eb9a2adb072ac6877fa2aff62c42eaa138eff\""
Feb  9 18:43:23.146988 env[1222]: time="2024-02-09T18:43:23.146938913Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb  9 18:43:23.567450 kubelet[1521]: E0209 18:43:23.567408    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:24.568061 kubelet[1521]: E0209 18:43:24.568010    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:25.183959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1061330893.mount: Deactivated successfully.
Feb  9 18:43:25.569041 kubelet[1521]: E0209 18:43:25.568830    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:25.896371 env[1222]: time="2024-02-09T18:43:25.895716118Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:43:25.898234 env[1222]: time="2024-02-09T18:43:25.898197940Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:43:25.899907 env[1222]: time="2024-02-09T18:43:25.899880049Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:43:25.901843 env[1222]: time="2024-02-09T18:43:25.901813689Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:43:25.902564 env[1222]: time="2024-02-09T18:43:25.902535758Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\""
Feb  9 18:43:25.904492 env[1222]: time="2024-02-09T18:43:25.904458797Z" level=info msg="CreateContainer within sandbox \"2a30173d21a190507fc3805dfe6eb9a2adb072ac6877fa2aff62c42eaa138eff\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Feb  9 18:43:25.912741 env[1222]: time="2024-02-09T18:43:25.912702776Z" level=info msg="CreateContainer within sandbox \"2a30173d21a190507fc3805dfe6eb9a2adb072ac6877fa2aff62c42eaa138eff\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"0f36f1f37f064f6ea3f743fd19e3e9f374603e2341c95d88f186d445f58468d5\""
Feb  9 18:43:25.913135 env[1222]: time="2024-02-09T18:43:25.913088752Z" level=info msg="StartContainer for \"0f36f1f37f064f6ea3f743fd19e3e9f374603e2341c95d88f186d445f58468d5\""
Feb  9 18:43:25.967651 env[1222]: time="2024-02-09T18:43:25.967608950Z" level=info msg="StartContainer for \"0f36f1f37f064f6ea3f743fd19e3e9f374603e2341c95d88f186d445f58468d5\" returns successfully"
Feb  9 18:43:26.569310 kubelet[1521]: E0209 18:43:26.569259    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:27.569929 kubelet[1521]: E0209 18:43:27.569886    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:28.018371 update_engine[1206]: I0209 18:43:28.018298  1206 update_attempter.cc:509] Updating boot flags...
Feb  9 18:43:28.570344 kubelet[1521]: E0209 18:43:28.570304    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:29.538120 kubelet[1521]: E0209 18:43:29.538074    1521 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:29.570562 kubelet[1521]: E0209 18:43:29.570505    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:30.571683 kubelet[1521]: E0209 18:43:30.571637    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:30.831399 kubelet[1521]: I0209 18:43:30.831301    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-j55sr" podStartSLOduration=-9.22337202502351e+09 pod.CreationTimestamp="2024-02-09 18:43:19 +0000 UTC" firstStartedPulling="2024-02-09 18:43:23.14665922 +0000 UTC m=+34.484006457" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:43:26.783181997 +0000 UTC m=+38.120529234" watchObservedRunningTime="2024-02-09 18:43:30.831265908 +0000 UTC m=+42.168613145"
Feb  9 18:43:30.831399 kubelet[1521]: I0209 18:43:30.831393    1521 topology_manager.go:210] "Topology Admit Handler"
Feb  9 18:43:30.952079 kubelet[1521]: I0209 18:43:30.952041    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n9nj\" (UniqueName: \"kubernetes.io/projected/59949d56-a67f-4177-8638-38e1a34a7996-kube-api-access-5n9nj\") pod \"nfs-server-provisioner-0\" (UID: \"59949d56-a67f-4177-8638-38e1a34a7996\") " pod="default/nfs-server-provisioner-0"
Feb  9 18:43:30.952297 kubelet[1521]: I0209 18:43:30.952283    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/59949d56-a67f-4177-8638-38e1a34a7996-data\") pod \"nfs-server-provisioner-0\" (UID: \"59949d56-a67f-4177-8638-38e1a34a7996\") " pod="default/nfs-server-provisioner-0"
Feb  9 18:43:31.135589 env[1222]: time="2024-02-09T18:43:31.135033200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:59949d56-a67f-4177-8638-38e1a34a7996,Namespace:default,Attempt:0,}"
Feb  9 18:43:31.160430 systemd-networkd[1103]: lxc3dcd4f7581c2: Link UP
Feb  9 18:43:31.170823 kernel: eth0: renamed from tmpfe911
Feb  9 18:43:31.179313 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb  9 18:43:31.179371 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc3dcd4f7581c2: link becomes ready
Feb  9 18:43:31.179399 systemd-networkd[1103]: lxc3dcd4f7581c2: Gained carrier
Feb  9 18:43:31.400923 env[1222]: time="2024-02-09T18:43:31.400626043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 18:43:31.401050 env[1222]: time="2024-02-09T18:43:31.400670605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 18:43:31.401050 env[1222]: time="2024-02-09T18:43:31.400680885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 18:43:31.401050 env[1222]: time="2024-02-09T18:43:31.400828089Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe911dc87d98cf3ed59395a51a0beb1fee7a9c6c0bbf8f1a6cf43a2ff6b4b034 pid=2800 runtime=io.containerd.runc.v2
Feb  9 18:43:31.429086 systemd-resolved[1158]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb  9 18:43:31.444730 env[1222]: time="2024-02-09T18:43:31.444685658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:59949d56-a67f-4177-8638-38e1a34a7996,Namespace:default,Attempt:0,} returns sandbox id \"fe911dc87d98cf3ed59395a51a0beb1fee7a9c6c0bbf8f1a6cf43a2ff6b4b034\""
Feb  9 18:43:31.446121 env[1222]: time="2024-02-09T18:43:31.446089340Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb  9 18:43:31.572744 kubelet[1521]: E0209 18:43:31.572708    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:32.573652 kubelet[1521]: E0209 18:43:32.573602    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:33.160983 systemd-networkd[1103]: lxc3dcd4f7581c2: Gained IPv6LL
Feb  9 18:43:33.475567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3067196400.mount: Deactivated successfully.
Feb  9 18:43:33.574643 kubelet[1521]: E0209 18:43:33.574594    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:34.575217 kubelet[1521]: E0209 18:43:34.575156    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:35.261361 env[1222]: time="2024-02-09T18:43:35.261299301Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:43:35.262995 env[1222]: time="2024-02-09T18:43:35.262965783Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:43:35.264665 env[1222]: time="2024-02-09T18:43:35.264637025Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:43:35.266127 env[1222]: time="2024-02-09T18:43:35.266098702Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:43:35.266883 env[1222]: time="2024-02-09T18:43:35.266855120Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\""
Feb  9 18:43:35.269363 env[1222]: time="2024-02-09T18:43:35.269323982Z" level=info msg="CreateContainer within sandbox \"fe911dc87d98cf3ed59395a51a0beb1fee7a9c6c0bbf8f1a6cf43a2ff6b4b034\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb  9 18:43:35.277169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1838198611.mount: Deactivated successfully.
Feb  9 18:43:35.281132 env[1222]: time="2024-02-09T18:43:35.281100038Z" level=info msg="CreateContainer within sandbox \"fe911dc87d98cf3ed59395a51a0beb1fee7a9c6c0bbf8f1a6cf43a2ff6b4b034\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"462ec5d250e50053fe8aa1026d37ff28b9c7adeb577e27a4b1bbf915f2d93ea7\""
Feb  9 18:43:35.281729 env[1222]: time="2024-02-09T18:43:35.281706373Z" level=info msg="StartContainer for \"462ec5d250e50053fe8aa1026d37ff28b9c7adeb577e27a4b1bbf915f2d93ea7\""
Feb  9 18:43:35.339975 env[1222]: time="2024-02-09T18:43:35.339930035Z" level=info msg="StartContainer for \"462ec5d250e50053fe8aa1026d37ff28b9c7adeb577e27a4b1bbf915f2d93ea7\" returns successfully"
Feb  9 18:43:35.575910 kubelet[1521]: E0209 18:43:35.575871    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:36.576836 kubelet[1521]: E0209 18:43:36.576756    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:37.577144 kubelet[1521]: E0209 18:43:37.577092    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:38.577540 kubelet[1521]: E0209 18:43:38.577500    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:39.578130 kubelet[1521]: E0209 18:43:39.578070    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:40.578746 kubelet[1521]: E0209 18:43:40.578693    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:41.578931 kubelet[1521]: E0209 18:43:41.578875    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:42.579669 kubelet[1521]: E0209 18:43:42.579621    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:43.579966 kubelet[1521]: E0209 18:43:43.579916    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:44.580837 kubelet[1521]: E0209 18:43:44.580777    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:45.261867 kubelet[1521]: I0209 18:43:45.261821    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372021593012e+09 pod.CreationTimestamp="2024-02-09 18:43:30 +0000 UTC" firstStartedPulling="2024-02-09 18:43:31.445679048 +0000 UTC m=+42.783026285" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:43:35.800542556 +0000 UTC m=+47.137889793" watchObservedRunningTime="2024-02-09 18:43:45.261764755 +0000 UTC m=+56.599111952"
Feb  9 18:43:45.262067 kubelet[1521]: I0209 18:43:45.261968    1521 topology_manager.go:210] "Topology Admit Handler"
Feb  9 18:43:45.418609 kubelet[1521]: I0209 18:43:45.418569    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2ff15e2f-8111-4781-a3b5-d24cdb0030f2\" (UniqueName: \"kubernetes.io/nfs/fb40ebff-fc74-4001-884a-1693b173da76-pvc-2ff15e2f-8111-4781-a3b5-d24cdb0030f2\") pod \"test-pod-1\" (UID: \"fb40ebff-fc74-4001-884a-1693b173da76\") " pod="default/test-pod-1"
Feb  9 18:43:45.418609 kubelet[1521]: I0209 18:43:45.418610    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pflwh\" (UniqueName: \"kubernetes.io/projected/fb40ebff-fc74-4001-884a-1693b173da76-kube-api-access-pflwh\") pod \"test-pod-1\" (UID: \"fb40ebff-fc74-4001-884a-1693b173da76\") " pod="default/test-pod-1"
Feb  9 18:43:45.539818 kernel: FS-Cache: Loaded
Feb  9 18:43:45.569069 kernel: RPC: Registered named UNIX socket transport module.
Feb  9 18:43:45.569183 kernel: RPC: Registered udp transport module.
Feb  9 18:43:45.570311 kernel: RPC: Registered tcp transport module.
Feb  9 18:43:45.570362 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb  9 18:43:45.581614 kubelet[1521]: E0209 18:43:45.581576    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:45.606822 kernel: FS-Cache: Netfs 'nfs' registered for caching
Feb  9 18:43:45.735815 kernel: NFS: Registering the id_resolver key type
Feb  9 18:43:45.736013 kernel: Key type id_resolver registered
Feb  9 18:43:45.736035 kernel: Key type id_legacy registered
Feb  9 18:43:45.754746 nfsidmap[2943]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb  9 18:43:45.757538 nfsidmap[2946]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb  9 18:43:45.864540 env[1222]: time="2024-02-09T18:43:45.864504399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:fb40ebff-fc74-4001-884a-1693b173da76,Namespace:default,Attempt:0,}"
Feb  9 18:43:45.963033 systemd-networkd[1103]: lxc0e5b55d47317: Link UP
Feb  9 18:43:45.973836 kernel: eth0: renamed from tmp7aa53
Feb  9 18:43:45.982268 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb  9 18:43:45.982346 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0e5b55d47317: link becomes ready
Feb  9 18:43:45.982360 systemd-networkd[1103]: lxc0e5b55d47317: Gained carrier
Feb  9 18:43:46.166281 env[1222]: time="2024-02-09T18:43:46.165896266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 18:43:46.166281 env[1222]: time="2024-02-09T18:43:46.165988627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 18:43:46.166281 env[1222]: time="2024-02-09T18:43:46.166018908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 18:43:46.166281 env[1222]: time="2024-02-09T18:43:46.166213791Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7aa53bf67ed28cd29da6c7496f44f92edbc172a61b6a4368617dbddaf5c5101a pid=2980 runtime=io.containerd.runc.v2
Feb  9 18:43:46.207534 systemd-resolved[1158]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb  9 18:43:46.225121 env[1222]: time="2024-02-09T18:43:46.225088182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:fb40ebff-fc74-4001-884a-1693b173da76,Namespace:default,Attempt:0,} returns sandbox id \"7aa53bf67ed28cd29da6c7496f44f92edbc172a61b6a4368617dbddaf5c5101a\""
Feb  9 18:43:46.226500 env[1222]: time="2024-02-09T18:43:46.226469084Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb  9 18:43:46.533243 env[1222]: time="2024-02-09T18:43:46.532814953Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:43:46.534300 env[1222]: time="2024-02-09T18:43:46.534269496Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:43:46.535720 env[1222]: time="2024-02-09T18:43:46.535695239Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:43:46.537316 env[1222]: time="2024-02-09T18:43:46.537288145Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:43:46.538692 env[1222]: time="2024-02-09T18:43:46.538662047Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\""
Feb  9 18:43:46.540306 env[1222]: time="2024-02-09T18:43:46.540272473Z" level=info msg="CreateContainer within sandbox \"7aa53bf67ed28cd29da6c7496f44f92edbc172a61b6a4368617dbddaf5c5101a\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb  9 18:43:46.549193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2107339988.mount: Deactivated successfully.
Feb  9 18:43:46.551444 env[1222]: time="2024-02-09T18:43:46.551411653Z" level=info msg="CreateContainer within sandbox \"7aa53bf67ed28cd29da6c7496f44f92edbc172a61b6a4368617dbddaf5c5101a\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"fc5dad00b46cb50e04dbe49c1c0fde67a1b783dc5450f3244534b35a1b8c3113\""
Feb  9 18:43:46.552139 env[1222]: time="2024-02-09T18:43:46.552115985Z" level=info msg="StartContainer for \"fc5dad00b46cb50e04dbe49c1c0fde67a1b783dc5450f3244534b35a1b8c3113\""
Feb  9 18:43:46.581882 kubelet[1521]: E0209 18:43:46.581849    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:46.601962 env[1222]: time="2024-02-09T18:43:46.601925389Z" level=info msg="StartContainer for \"fc5dad00b46cb50e04dbe49c1c0fde67a1b783dc5450f3244534b35a1b8c3113\" returns successfully"
Feb  9 18:43:46.816872 kubelet[1521]: I0209 18:43:46.816383    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.223372021038427e+09 pod.CreationTimestamp="2024-02-09 18:43:31 +0000 UTC" firstStartedPulling="2024-02-09 18:43:46.226162239 +0000 UTC m=+57.563509476" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:43:46.81616461 +0000 UTC m=+58.153511847" watchObservedRunningTime="2024-02-09 18:43:46.816347933 +0000 UTC m=+58.153695170"
Feb  9 18:43:47.496964 systemd-networkd[1103]: lxc0e5b55d47317: Gained IPv6LL
Feb  9 18:43:47.582194 kubelet[1521]: E0209 18:43:47.582162    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:48.583255 kubelet[1521]: E0209 18:43:48.583191    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:49.537830 kubelet[1521]: E0209 18:43:49.537765    1521 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:49.584045 kubelet[1521]: E0209 18:43:49.584003    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:50.585113 kubelet[1521]: E0209 18:43:50.585071    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:51.586105 kubelet[1521]: E0209 18:43:51.586044    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:52.586263 kubelet[1521]: E0209 18:43:52.586211    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:53.427115 systemd[1]: run-containerd-runc-k8s.io-876b3e0c46db7596f3f3fc4a783ed437170c05365bb27b0caefdd25039e75a0c-runc.kkbF6w.mount: Deactivated successfully.
Feb  9 18:43:53.454681 env[1222]: time="2024-02-09T18:43:53.454624249Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb  9 18:43:53.459659 env[1222]: time="2024-02-09T18:43:53.459620394Z" level=info msg="StopContainer for \"876b3e0c46db7596f3f3fc4a783ed437170c05365bb27b0caefdd25039e75a0c\" with timeout 1 (s)"
Feb  9 18:43:53.459902 env[1222]: time="2024-02-09T18:43:53.459879518Z" level=info msg="Stop container \"876b3e0c46db7596f3f3fc4a783ed437170c05365bb27b0caefdd25039e75a0c\" with signal terminated"
Feb  9 18:43:53.465254 systemd-networkd[1103]: lxc_health: Link DOWN
Feb  9 18:43:53.465260 systemd-networkd[1103]: lxc_health: Lost carrier
Feb  9 18:43:53.521686 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-876b3e0c46db7596f3f3fc4a783ed437170c05365bb27b0caefdd25039e75a0c-rootfs.mount: Deactivated successfully.
Feb  9 18:43:53.532539 env[1222]: time="2024-02-09T18:43:53.532488622Z" level=info msg="shim disconnected" id=876b3e0c46db7596f3f3fc4a783ed437170c05365bb27b0caefdd25039e75a0c
Feb  9 18:43:53.532539 env[1222]: time="2024-02-09T18:43:53.532540383Z" level=warning msg="cleaning up after shim disconnected" id=876b3e0c46db7596f3f3fc4a783ed437170c05365bb27b0caefdd25039e75a0c namespace=k8s.io
Feb  9 18:43:53.532731 env[1222]: time="2024-02-09T18:43:53.532550383Z" level=info msg="cleaning up dead shim"
Feb  9 18:43:53.539070 env[1222]: time="2024-02-09T18:43:53.539025947Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:43:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3114 runtime=io.containerd.runc.v2\n"
Feb  9 18:43:53.541103 env[1222]: time="2024-02-09T18:43:53.541066014Z" level=info msg="StopContainer for \"876b3e0c46db7596f3f3fc4a783ed437170c05365bb27b0caefdd25039e75a0c\" returns successfully"
Feb  9 18:43:53.541719 env[1222]: time="2024-02-09T18:43:53.541674062Z" level=info msg="StopPodSandbox for \"53b5485c552c1b29c24cc960a84348e82691c935ae40ee1e7d55e5e6e4c52fc1\""
Feb  9 18:43:53.541850 env[1222]: time="2024-02-09T18:43:53.541739823Z" level=info msg="Container to stop \"b07ac15840e8880c634ed67191ba77177b7099ad238b136ff36977b354e3567e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  9 18:43:53.541850 env[1222]: time="2024-02-09T18:43:53.541755183Z" level=info msg="Container to stop \"a5a3a2ed92c661aba3a0ffeb6e419885e44b9fa7068756fa41ca62b8ef76b4c1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  9 18:43:53.541850 env[1222]: time="2024-02-09T18:43:53.541766023Z" level=info msg="Container to stop \"876b3e0c46db7596f3f3fc4a783ed437170c05365bb27b0caefdd25039e75a0c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  9 18:43:53.541850 env[1222]: time="2024-02-09T18:43:53.541778743Z" level=info msg="Container to stop \"1063ac6b2abedfa80036ebcc90232301bd1b5743f19e138b75d2edd89adb8dce\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  9 18:43:53.541850 env[1222]: time="2024-02-09T18:43:53.541809023Z" level=info msg="Container to stop \"bca7c328f9d04a619273ca39b207bca6eebcadcb68778f8d660fa807ffb80ec3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  9 18:43:53.543316 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-53b5485c552c1b29c24cc960a84348e82691c935ae40ee1e7d55e5e6e4c52fc1-shm.mount: Deactivated successfully.
Feb  9 18:43:53.568028 env[1222]: time="2024-02-09T18:43:53.567973844Z" level=info msg="shim disconnected" id=53b5485c552c1b29c24cc960a84348e82691c935ae40ee1e7d55e5e6e4c52fc1
Feb  9 18:43:53.568028 env[1222]: time="2024-02-09T18:43:53.568022804Z" level=warning msg="cleaning up after shim disconnected" id=53b5485c552c1b29c24cc960a84348e82691c935ae40ee1e7d55e5e6e4c52fc1 namespace=k8s.io
Feb  9 18:43:53.568028 env[1222]: time="2024-02-09T18:43:53.568032245Z" level=info msg="cleaning up dead shim"
Feb  9 18:43:53.575225 env[1222]: time="2024-02-09T18:43:53.574969375Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:43:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3147 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T18:43:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
Feb  9 18:43:53.575338 env[1222]: time="2024-02-09T18:43:53.575257019Z" level=info msg="TearDown network for sandbox \"53b5485c552c1b29c24cc960a84348e82691c935ae40ee1e7d55e5e6e4c52fc1\" successfully"
Feb  9 18:43:53.575338 env[1222]: time="2024-02-09T18:43:53.575278499Z" level=info msg="StopPodSandbox for \"53b5485c552c1b29c24cc960a84348e82691c935ae40ee1e7d55e5e6e4c52fc1\" returns successfully"
Feb  9 18:43:53.587103 kubelet[1521]: E0209 18:43:53.587062    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:53.659680 kubelet[1521]: I0209 18:43:53.659639    1521 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-host-proc-sys-kernel\") pod \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\" (UID: \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\") "
Feb  9 18:43:53.659772 kubelet[1521]: I0209 18:43:53.659690    1521 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-lib-modules\") pod \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\" (UID: \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\") "
Feb  9 18:43:53.659772 kubelet[1521]: I0209 18:43:53.659710    1521 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-cni-path\") pod \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\" (UID: \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\") "
Feb  9 18:43:53.659772 kubelet[1521]: I0209 18:43:53.659733    1521 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a7cd3c62-d2a9-42dc-940e-44be03fd2442-clustermesh-secrets\") pod \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\" (UID: \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\") "
Feb  9 18:43:53.659772 kubelet[1521]: I0209 18:43:53.659749    1521 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-hostproc\") pod \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\" (UID: \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\") "
Feb  9 18:43:53.659772 kubelet[1521]: I0209 18:43:53.659764    1521 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-bpf-maps\") pod \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\" (UID: \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\") "
Feb  9 18:43:53.659953 kubelet[1521]: I0209 18:43:53.659781    1521 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-cilium-cgroup\") pod \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\" (UID: \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\") "
Feb  9 18:43:53.659953 kubelet[1521]: I0209 18:43:53.659821    1521 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-xtables-lock\") pod \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\" (UID: \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\") "
Feb  9 18:43:53.659953 kubelet[1521]: I0209 18:43:53.659855    1521 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-etc-cni-netd\") pod \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\" (UID: \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\") "
Feb  9 18:43:53.659953 kubelet[1521]: I0209 18:43:53.659878    1521 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a7cd3c62-d2a9-42dc-940e-44be03fd2442-cilium-config-path\") pod \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\" (UID: \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\") "
Feb  9 18:43:53.659953 kubelet[1521]: I0209 18:43:53.659903    1521 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a7cd3c62-d2a9-42dc-940e-44be03fd2442-hubble-tls\") pod \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\" (UID: \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\") "
Feb  9 18:43:53.659953 kubelet[1521]: I0209 18:43:53.659919    1521 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a7cd3c62-d2a9-42dc-940e-44be03fd2442" (UID: "a7cd3c62-d2a9-42dc-940e-44be03fd2442"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:43:53.660086 kubelet[1521]: I0209 18:43:53.659931    1521 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk299\" (UniqueName: \"kubernetes.io/projected/a7cd3c62-d2a9-42dc-940e-44be03fd2442-kube-api-access-wk299\") pod \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\" (UID: \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\") "
Feb  9 18:43:53.660086 kubelet[1521]: I0209 18:43:53.659989    1521 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-cilium-run\") pod \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\" (UID: \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\") "
Feb  9 18:43:53.660086 kubelet[1521]: I0209 18:43:53.660013    1521 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-host-proc-sys-net\") pod \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\" (UID: \"a7cd3c62-d2a9-42dc-940e-44be03fd2442\") "
Feb  9 18:43:53.660086 kubelet[1521]: I0209 18:43:53.660044    1521 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-lib-modules\") on node \"10.0.0.123\" DevicePath \"\""
Feb  9 18:43:53.660086 kubelet[1521]: I0209 18:43:53.660061    1521 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a7cd3c62-d2a9-42dc-940e-44be03fd2442" (UID: "a7cd3c62-d2a9-42dc-940e-44be03fd2442"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:43:53.660086 kubelet[1521]: I0209 18:43:53.660079    1521 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a7cd3c62-d2a9-42dc-940e-44be03fd2442" (UID: "a7cd3c62-d2a9-42dc-940e-44be03fd2442"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:43:53.660228 kubelet[1521]: I0209 18:43:53.660095    1521 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a7cd3c62-d2a9-42dc-940e-44be03fd2442" (UID: "a7cd3c62-d2a9-42dc-940e-44be03fd2442"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:43:53.660228 kubelet[1521]: I0209 18:43:53.660111    1521 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-cni-path" (OuterVolumeSpecName: "cni-path") pod "a7cd3c62-d2a9-42dc-940e-44be03fd2442" (UID: "a7cd3c62-d2a9-42dc-940e-44be03fd2442"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:43:53.660228 kubelet[1521]: I0209 18:43:53.660126    1521 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a7cd3c62-d2a9-42dc-940e-44be03fd2442" (UID: "a7cd3c62-d2a9-42dc-940e-44be03fd2442"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:43:53.660228 kubelet[1521]: I0209 18:43:53.660139    1521 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-hostproc" (OuterVolumeSpecName: "hostproc") pod "a7cd3c62-d2a9-42dc-940e-44be03fd2442" (UID: "a7cd3c62-d2a9-42dc-940e-44be03fd2442"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:43:53.660228 kubelet[1521]: I0209 18:43:53.660154    1521 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a7cd3c62-d2a9-42dc-940e-44be03fd2442" (UID: "a7cd3c62-d2a9-42dc-940e-44be03fd2442"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:43:53.660339 kubelet[1521]: I0209 18:43:53.660167    1521 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a7cd3c62-d2a9-42dc-940e-44be03fd2442" (UID: "a7cd3c62-d2a9-42dc-940e-44be03fd2442"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:43:53.660339 kubelet[1521]: W0209 18:43:53.660294    1521 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/a7cd3c62-d2a9-42dc-940e-44be03fd2442/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb  9 18:43:53.660458 kubelet[1521]: I0209 18:43:53.660423    1521 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a7cd3c62-d2a9-42dc-940e-44be03fd2442" (UID: "a7cd3c62-d2a9-42dc-940e-44be03fd2442"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:43:53.662039 kubelet[1521]: I0209 18:43:53.662002    1521 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7cd3c62-d2a9-42dc-940e-44be03fd2442-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a7cd3c62-d2a9-42dc-940e-44be03fd2442" (UID: "a7cd3c62-d2a9-42dc-940e-44be03fd2442"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb  9 18:43:53.662487 kubelet[1521]: I0209 18:43:53.662452    1521 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7cd3c62-d2a9-42dc-940e-44be03fd2442-kube-api-access-wk299" (OuterVolumeSpecName: "kube-api-access-wk299") pod "a7cd3c62-d2a9-42dc-940e-44be03fd2442" (UID: "a7cd3c62-d2a9-42dc-940e-44be03fd2442"). InnerVolumeSpecName "kube-api-access-wk299". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb  9 18:43:53.663184 kubelet[1521]: I0209 18:43:53.663158    1521 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7cd3c62-d2a9-42dc-940e-44be03fd2442-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a7cd3c62-d2a9-42dc-940e-44be03fd2442" (UID: "a7cd3c62-d2a9-42dc-940e-44be03fd2442"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb  9 18:43:53.663370 kubelet[1521]: I0209 18:43:53.663328    1521 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7cd3c62-d2a9-42dc-940e-44be03fd2442-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a7cd3c62-d2a9-42dc-940e-44be03fd2442" (UID: "a7cd3c62-d2a9-42dc-940e-44be03fd2442"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb  9 18:43:53.761182 kubelet[1521]: I0209 18:43:53.761086    1521 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-cilium-cgroup\") on node \"10.0.0.123\" DevicePath \"\""
Feb  9 18:43:53.761182 kubelet[1521]: I0209 18:43:53.761117    1521 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-xtables-lock\") on node \"10.0.0.123\" DevicePath \"\""
Feb  9 18:43:53.761182 kubelet[1521]: I0209 18:43:53.761127    1521 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a7cd3c62-d2a9-42dc-940e-44be03fd2442-clustermesh-secrets\") on node \"10.0.0.123\" DevicePath \"\""
Feb  9 18:43:53.761182 kubelet[1521]: I0209 18:43:53.761139    1521 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-hostproc\") on node \"10.0.0.123\" DevicePath \"\""
Feb  9 18:43:53.761182 kubelet[1521]: I0209 18:43:53.761147    1521 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-bpf-maps\") on node \"10.0.0.123\" DevicePath \"\""
Feb  9 18:43:53.761182 kubelet[1521]: I0209 18:43:53.761156    1521 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a7cd3c62-d2a9-42dc-940e-44be03fd2442-cilium-config-path\") on node \"10.0.0.123\" DevicePath \"\""
Feb  9 18:43:53.761182 kubelet[1521]: I0209 18:43:53.761165    1521 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-etc-cni-netd\") on node \"10.0.0.123\" DevicePath \"\""
Feb  9 18:43:53.761182 kubelet[1521]: I0209 18:43:53.761175    1521 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-cilium-run\") on node \"10.0.0.123\" DevicePath \"\""
Feb  9 18:43:53.761418 kubelet[1521]: I0209 18:43:53.761184    1521 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-host-proc-sys-net\") on node \"10.0.0.123\" DevicePath \"\""
Feb  9 18:43:53.761418 kubelet[1521]: I0209 18:43:53.761194    1521 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a7cd3c62-d2a9-42dc-940e-44be03fd2442-hubble-tls\") on node \"10.0.0.123\" DevicePath \"\""
Feb  9 18:43:53.761418 kubelet[1521]: I0209 18:43:53.761205    1521 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-wk299\" (UniqueName: \"kubernetes.io/projected/a7cd3c62-d2a9-42dc-940e-44be03fd2442-kube-api-access-wk299\") on node \"10.0.0.123\" DevicePath \"\""
Feb  9 18:43:53.761418 kubelet[1521]: I0209 18:43:53.761214    1521 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-cni-path\") on node \"10.0.0.123\" DevicePath \"\""
Feb  9 18:43:53.761418 kubelet[1521]: I0209 18:43:53.761222    1521 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a7cd3c62-d2a9-42dc-940e-44be03fd2442-host-proc-sys-kernel\") on node \"10.0.0.123\" DevicePath \"\""
Feb  9 18:43:53.822777 kubelet[1521]: I0209 18:43:53.822732    1521 scope.go:115] "RemoveContainer" containerID="876b3e0c46db7596f3f3fc4a783ed437170c05365bb27b0caefdd25039e75a0c"
Feb  9 18:43:53.824596 env[1222]: time="2024-02-09T18:43:53.824554941Z" level=info msg="RemoveContainer for \"876b3e0c46db7596f3f3fc4a783ed437170c05365bb27b0caefdd25039e75a0c\""
Feb  9 18:43:53.831823 env[1222]: time="2024-02-09T18:43:53.831564952Z" level=info msg="RemoveContainer for \"876b3e0c46db7596f3f3fc4a783ed437170c05365bb27b0caefdd25039e75a0c\" returns successfully"
Feb  9 18:43:53.831925 kubelet[1521]: I0209 18:43:53.831808    1521 scope.go:115] "RemoveContainer" containerID="a5a3a2ed92c661aba3a0ffeb6e419885e44b9fa7068756fa41ca62b8ef76b4c1"
Feb  9 18:43:53.833065 env[1222]: time="2024-02-09T18:43:53.832749808Z" level=info msg="RemoveContainer for \"a5a3a2ed92c661aba3a0ffeb6e419885e44b9fa7068756fa41ca62b8ef76b4c1\""
Feb  9 18:43:53.835295 env[1222]: time="2024-02-09T18:43:53.835254120Z" level=info msg="RemoveContainer for \"a5a3a2ed92c661aba3a0ffeb6e419885e44b9fa7068756fa41ca62b8ef76b4c1\" returns successfully"
Feb  9 18:43:53.835439 kubelet[1521]: I0209 18:43:53.835421    1521 scope.go:115] "RemoveContainer" containerID="b07ac15840e8880c634ed67191ba77177b7099ad238b136ff36977b354e3567e"
Feb  9 18:43:53.836387 env[1222]: time="2024-02-09T18:43:53.836364655Z" level=info msg="RemoveContainer for \"b07ac15840e8880c634ed67191ba77177b7099ad238b136ff36977b354e3567e\""
Feb  9 18:43:53.839395 env[1222]: time="2024-02-09T18:43:53.839318893Z" level=info msg="RemoveContainer for \"b07ac15840e8880c634ed67191ba77177b7099ad238b136ff36977b354e3567e\" returns successfully"
Feb  9 18:43:53.839776 kubelet[1521]: I0209 18:43:53.839749    1521 scope.go:115] "RemoveContainer" containerID="bca7c328f9d04a619273ca39b207bca6eebcadcb68778f8d660fa807ffb80ec3"
Feb  9 18:43:53.841094 env[1222]: time="2024-02-09T18:43:53.841065556Z" level=info msg="RemoveContainer for \"bca7c328f9d04a619273ca39b207bca6eebcadcb68778f8d660fa807ffb80ec3\""
Feb  9 18:43:53.843613 env[1222]: time="2024-02-09T18:43:53.843582829Z" level=info msg="RemoveContainer for \"bca7c328f9d04a619273ca39b207bca6eebcadcb68778f8d660fa807ffb80ec3\" returns successfully"
Feb  9 18:43:53.843881 kubelet[1521]: I0209 18:43:53.843852    1521 scope.go:115] "RemoveContainer" containerID="1063ac6b2abedfa80036ebcc90232301bd1b5743f19e138b75d2edd89adb8dce"
Feb  9 18:43:53.844978 env[1222]: time="2024-02-09T18:43:53.844938686Z" level=info msg="RemoveContainer for \"1063ac6b2abedfa80036ebcc90232301bd1b5743f19e138b75d2edd89adb8dce\""
Feb  9 18:43:53.848960 env[1222]: time="2024-02-09T18:43:53.848926058Z" level=info msg="RemoveContainer for \"1063ac6b2abedfa80036ebcc90232301bd1b5743f19e138b75d2edd89adb8dce\" returns successfully"
Feb  9 18:43:53.849189 kubelet[1521]: I0209 18:43:53.849166    1521 scope.go:115] "RemoveContainer" containerID="876b3e0c46db7596f3f3fc4a783ed437170c05365bb27b0caefdd25039e75a0c"
Feb  9 18:43:53.849432 env[1222]: time="2024-02-09T18:43:53.849360264Z" level=error msg="ContainerStatus for \"876b3e0c46db7596f3f3fc4a783ed437170c05365bb27b0caefdd25039e75a0c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"876b3e0c46db7596f3f3fc4a783ed437170c05365bb27b0caefdd25039e75a0c\": not found"
Feb  9 18:43:53.849610 kubelet[1521]: E0209 18:43:53.849594    1521 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"876b3e0c46db7596f3f3fc4a783ed437170c05365bb27b0caefdd25039e75a0c\": not found" containerID="876b3e0c46db7596f3f3fc4a783ed437170c05365bb27b0caefdd25039e75a0c"
Feb  9 18:43:53.849660 kubelet[1521]: I0209 18:43:53.849626    1521 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:876b3e0c46db7596f3f3fc4a783ed437170c05365bb27b0caefdd25039e75a0c} err="failed to get container status \"876b3e0c46db7596f3f3fc4a783ed437170c05365bb27b0caefdd25039e75a0c\": rpc error: code = NotFound desc = an error occurred when try to find container \"876b3e0c46db7596f3f3fc4a783ed437170c05365bb27b0caefdd25039e75a0c\": not found"
Feb  9 18:43:53.849660 kubelet[1521]: I0209 18:43:53.849638    1521 scope.go:115] "RemoveContainer" containerID="a5a3a2ed92c661aba3a0ffeb6e419885e44b9fa7068756fa41ca62b8ef76b4c1"
Feb  9 18:43:53.851224 env[1222]: time="2024-02-09T18:43:53.850578680Z" level=error msg="ContainerStatus for \"a5a3a2ed92c661aba3a0ffeb6e419885e44b9fa7068756fa41ca62b8ef76b4c1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a5a3a2ed92c661aba3a0ffeb6e419885e44b9fa7068756fa41ca62b8ef76b4c1\": not found"
Feb  9 18:43:53.851224 env[1222]: time="2024-02-09T18:43:53.851003805Z" level=error msg="ContainerStatus for \"b07ac15840e8880c634ed67191ba77177b7099ad238b136ff36977b354e3567e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b07ac15840e8880c634ed67191ba77177b7099ad238b136ff36977b354e3567e\": not found"
Feb  9 18:43:53.851339 kubelet[1521]: E0209 18:43:53.850766    1521 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a5a3a2ed92c661aba3a0ffeb6e419885e44b9fa7068756fa41ca62b8ef76b4c1\": not found" containerID="a5a3a2ed92c661aba3a0ffeb6e419885e44b9fa7068756fa41ca62b8ef76b4c1"
Feb  9 18:43:53.851339 kubelet[1521]: I0209 18:43:53.850822    1521 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:a5a3a2ed92c661aba3a0ffeb6e419885e44b9fa7068756fa41ca62b8ef76b4c1} err="failed to get container status \"a5a3a2ed92c661aba3a0ffeb6e419885e44b9fa7068756fa41ca62b8ef76b4c1\": rpc error: code = NotFound desc = an error occurred when try to find container \"a5a3a2ed92c661aba3a0ffeb6e419885e44b9fa7068756fa41ca62b8ef76b4c1\": not found"
Feb  9 18:43:53.851339 kubelet[1521]: I0209 18:43:53.850833    1521 scope.go:115] "RemoveContainer" containerID="b07ac15840e8880c634ed67191ba77177b7099ad238b136ff36977b354e3567e"
Feb  9 18:43:53.851339 kubelet[1521]: E0209 18:43:53.851147    1521 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b07ac15840e8880c634ed67191ba77177b7099ad238b136ff36977b354e3567e\": not found" containerID="b07ac15840e8880c634ed67191ba77177b7099ad238b136ff36977b354e3567e"
Feb  9 18:43:53.851339 kubelet[1521]: I0209 18:43:53.851169    1521 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:b07ac15840e8880c634ed67191ba77177b7099ad238b136ff36977b354e3567e} err="failed to get container status \"b07ac15840e8880c634ed67191ba77177b7099ad238b136ff36977b354e3567e\": rpc error: code = NotFound desc = an error occurred when try to find container \"b07ac15840e8880c634ed67191ba77177b7099ad238b136ff36977b354e3567e\": not found"
Feb  9 18:43:53.851339 kubelet[1521]: I0209 18:43:53.851178    1521 scope.go:115] "RemoveContainer" containerID="bca7c328f9d04a619273ca39b207bca6eebcadcb68778f8d660fa807ffb80ec3"
Feb  9 18:43:53.851475 kubelet[1521]: E0209 18:43:53.851461    1521 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bca7c328f9d04a619273ca39b207bca6eebcadcb68778f8d660fa807ffb80ec3\": not found" containerID="bca7c328f9d04a619273ca39b207bca6eebcadcb68778f8d660fa807ffb80ec3"
Feb  9 18:43:53.851501 env[1222]: time="2024-02-09T18:43:53.851310889Z" level=error msg="ContainerStatus for \"bca7c328f9d04a619273ca39b207bca6eebcadcb68778f8d660fa807ffb80ec3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bca7c328f9d04a619273ca39b207bca6eebcadcb68778f8d660fa807ffb80ec3\": not found"
Feb  9 18:43:53.851528 kubelet[1521]: I0209 18:43:53.851484    1521 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:bca7c328f9d04a619273ca39b207bca6eebcadcb68778f8d660fa807ffb80ec3} err="failed to get container status \"bca7c328f9d04a619273ca39b207bca6eebcadcb68778f8d660fa807ffb80ec3\": rpc error: code = NotFound desc = an error occurred when try to find container \"bca7c328f9d04a619273ca39b207bca6eebcadcb68778f8d660fa807ffb80ec3\": not found"
Feb  9 18:43:53.851528 kubelet[1521]: I0209 18:43:53.851494    1521 scope.go:115] "RemoveContainer" containerID="1063ac6b2abedfa80036ebcc90232301bd1b5743f19e138b75d2edd89adb8dce"
Feb  9 18:43:53.851725 env[1222]: time="2024-02-09T18:43:53.851622693Z" level=error msg="ContainerStatus for \"1063ac6b2abedfa80036ebcc90232301bd1b5743f19e138b75d2edd89adb8dce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1063ac6b2abedfa80036ebcc90232301bd1b5743f19e138b75d2edd89adb8dce\": not found"
Feb  9 18:43:53.851987 kubelet[1521]: E0209 18:43:53.851943    1521 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1063ac6b2abedfa80036ebcc90232301bd1b5743f19e138b75d2edd89adb8dce\": not found" containerID="1063ac6b2abedfa80036ebcc90232301bd1b5743f19e138b75d2edd89adb8dce"
Feb  9 18:43:53.852111 kubelet[1521]: I0209 18:43:53.852098    1521 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:1063ac6b2abedfa80036ebcc90232301bd1b5743f19e138b75d2edd89adb8dce} err="failed to get container status \"1063ac6b2abedfa80036ebcc90232301bd1b5743f19e138b75d2edd89adb8dce\": rpc error: code = NotFound desc = an error occurred when try to find container \"1063ac6b2abedfa80036ebcc90232301bd1b5743f19e138b75d2edd89adb8dce\": not found"
Feb  9 18:43:54.423524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53b5485c552c1b29c24cc960a84348e82691c935ae40ee1e7d55e5e6e4c52fc1-rootfs.mount: Deactivated successfully.
Feb  9 18:43:54.423696 systemd[1]: var-lib-kubelet-pods-a7cd3c62\x2dd2a9\x2d42dc\x2d940e\x2d44be03fd2442-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwk299.mount: Deactivated successfully.
Feb  9 18:43:54.423780 systemd[1]: var-lib-kubelet-pods-a7cd3c62\x2dd2a9\x2d42dc\x2d940e\x2d44be03fd2442-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb  9 18:43:54.423893 systemd[1]: var-lib-kubelet-pods-a7cd3c62\x2dd2a9\x2d42dc\x2d940e\x2d44be03fd2442-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb  9 18:43:54.588005 kubelet[1521]: E0209 18:43:54.587965    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:54.595079 kubelet[1521]: E0209 18:43:54.595063    1521 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb  9 18:43:55.589182 kubelet[1521]: E0209 18:43:55.589109    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:55.706565 kubelet[1521]: I0209 18:43:55.706521    1521 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=a7cd3c62-d2a9-42dc-940e-44be03fd2442 path="/var/lib/kubelet/pods/a7cd3c62-d2a9-42dc-940e-44be03fd2442/volumes"
Feb  9 18:43:56.589875 kubelet[1521]: E0209 18:43:56.589830    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:56.751246 kubelet[1521]: I0209 18:43:56.751201    1521 topology_manager.go:210] "Topology Admit Handler"
Feb  9 18:43:56.751246 kubelet[1521]: E0209 18:43:56.751248    1521 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a7cd3c62-d2a9-42dc-940e-44be03fd2442" containerName="mount-bpf-fs"
Feb  9 18:43:56.751246 kubelet[1521]: E0209 18:43:56.751258    1521 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a7cd3c62-d2a9-42dc-940e-44be03fd2442" containerName="clean-cilium-state"
Feb  9 18:43:56.751449 kubelet[1521]: E0209 18:43:56.751266    1521 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a7cd3c62-d2a9-42dc-940e-44be03fd2442" containerName="cilium-agent"
Feb  9 18:43:56.751449 kubelet[1521]: E0209 18:43:56.751273    1521 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a7cd3c62-d2a9-42dc-940e-44be03fd2442" containerName="apply-sysctl-overwrites"
Feb  9 18:43:56.751449 kubelet[1521]: E0209 18:43:56.751280    1521 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a7cd3c62-d2a9-42dc-940e-44be03fd2442" containerName="mount-cgroup"
Feb  9 18:43:56.751449 kubelet[1521]: I0209 18:43:56.751296    1521 memory_manager.go:346] "RemoveStaleState removing state" podUID="a7cd3c62-d2a9-42dc-940e-44be03fd2442" containerName="cilium-agent"
Feb  9 18:43:56.758568 kubelet[1521]: I0209 18:43:56.758534    1521 topology_manager.go:210] "Topology Admit Handler"
Feb  9 18:43:56.876102 kubelet[1521]: I0209 18:43:56.875990    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47cxp\" (UniqueName: \"kubernetes.io/projected/cd828511-6077-4789-b5c0-aa041725d6a7-kube-api-access-47cxp\") pod \"cilium-operator-f59cbd8c6-mpzx9\" (UID: \"cd828511-6077-4789-b5c0-aa041725d6a7\") " pod="kube-system/cilium-operator-f59cbd8c6-mpzx9"
Feb  9 18:43:56.876102 kubelet[1521]: I0209 18:43:56.876031    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-hostproc\") pod \"cilium-jbvc5\" (UID: \"e2231165-68d6-446c-a11f-8ed7a383b659\") " pod="kube-system/cilium-jbvc5"
Feb  9 18:43:56.876102 kubelet[1521]: I0209 18:43:56.876054    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e2231165-68d6-446c-a11f-8ed7a383b659-cilium-config-path\") pod \"cilium-jbvc5\" (UID: \"e2231165-68d6-446c-a11f-8ed7a383b659\") " pod="kube-system/cilium-jbvc5"
Feb  9 18:43:56.876102 kubelet[1521]: I0209 18:43:56.876075    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-cilium-run\") pod \"cilium-jbvc5\" (UID: \"e2231165-68d6-446c-a11f-8ed7a383b659\") " pod="kube-system/cilium-jbvc5"
Feb  9 18:43:56.876102 kubelet[1521]: I0209 18:43:56.876096    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-bpf-maps\") pod \"cilium-jbvc5\" (UID: \"e2231165-68d6-446c-a11f-8ed7a383b659\") " pod="kube-system/cilium-jbvc5"
Feb  9 18:43:56.876322 kubelet[1521]: I0209 18:43:56.876116    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-cilium-cgroup\") pod \"cilium-jbvc5\" (UID: \"e2231165-68d6-446c-a11f-8ed7a383b659\") " pod="kube-system/cilium-jbvc5"
Feb  9 18:43:56.876322 kubelet[1521]: I0209 18:43:56.876137    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-lib-modules\") pod \"cilium-jbvc5\" (UID: \"e2231165-68d6-446c-a11f-8ed7a383b659\") " pod="kube-system/cilium-jbvc5"
Feb  9 18:43:56.876322 kubelet[1521]: I0209 18:43:56.876156    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-xtables-lock\") pod \"cilium-jbvc5\" (UID: \"e2231165-68d6-446c-a11f-8ed7a383b659\") " pod="kube-system/cilium-jbvc5"
Feb  9 18:43:56.876322 kubelet[1521]: I0209 18:43:56.876175    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-host-proc-sys-net\") pod \"cilium-jbvc5\" (UID: \"e2231165-68d6-446c-a11f-8ed7a383b659\") " pod="kube-system/cilium-jbvc5"
Feb  9 18:43:56.876322 kubelet[1521]: I0209 18:43:56.876195    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-host-proc-sys-kernel\") pod \"cilium-jbvc5\" (UID: \"e2231165-68d6-446c-a11f-8ed7a383b659\") " pod="kube-system/cilium-jbvc5"
Feb  9 18:43:56.876322 kubelet[1521]: I0209 18:43:56.876226    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e2231165-68d6-446c-a11f-8ed7a383b659-hubble-tls\") pod \"cilium-jbvc5\" (UID: \"e2231165-68d6-446c-a11f-8ed7a383b659\") " pod="kube-system/cilium-jbvc5"
Feb  9 18:43:56.876454 kubelet[1521]: I0209 18:43:56.876246    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rspqv\" (UniqueName: \"kubernetes.io/projected/e2231165-68d6-446c-a11f-8ed7a383b659-kube-api-access-rspqv\") pod \"cilium-jbvc5\" (UID: \"e2231165-68d6-446c-a11f-8ed7a383b659\") " pod="kube-system/cilium-jbvc5"
Feb  9 18:43:56.876454 kubelet[1521]: I0209 18:43:56.876265    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-etc-cni-netd\") pod \"cilium-jbvc5\" (UID: \"e2231165-68d6-446c-a11f-8ed7a383b659\") " pod="kube-system/cilium-jbvc5"
Feb  9 18:43:56.876454 kubelet[1521]: I0209 18:43:56.876286    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e2231165-68d6-446c-a11f-8ed7a383b659-clustermesh-secrets\") pod \"cilium-jbvc5\" (UID: \"e2231165-68d6-446c-a11f-8ed7a383b659\") " pod="kube-system/cilium-jbvc5"
Feb  9 18:43:56.876454 kubelet[1521]: I0209 18:43:56.876304    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e2231165-68d6-446c-a11f-8ed7a383b659-cilium-ipsec-secrets\") pod \"cilium-jbvc5\" (UID: \"e2231165-68d6-446c-a11f-8ed7a383b659\") " pod="kube-system/cilium-jbvc5"
Feb  9 18:43:56.876454 kubelet[1521]: I0209 18:43:56.876326    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cd828511-6077-4789-b5c0-aa041725d6a7-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-mpzx9\" (UID: \"cd828511-6077-4789-b5c0-aa041725d6a7\") " pod="kube-system/cilium-operator-f59cbd8c6-mpzx9"
Feb  9 18:43:56.876561 kubelet[1521]: I0209 18:43:56.876346    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-cni-path\") pod \"cilium-jbvc5\" (UID: \"e2231165-68d6-446c-a11f-8ed7a383b659\") " pod="kube-system/cilium-jbvc5"
Feb  9 18:43:57.053652 kubelet[1521]: E0209 18:43:57.053617    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:43:57.054134 env[1222]: time="2024-02-09T18:43:57.054089904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-mpzx9,Uid:cd828511-6077-4789-b5c0-aa041725d6a7,Namespace:kube-system,Attempt:0,}"
Feb  9 18:43:57.061521 kubelet[1521]: E0209 18:43:57.061497    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:43:57.062510 env[1222]: time="2024-02-09T18:43:57.062185759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jbvc5,Uid:e2231165-68d6-446c-a11f-8ed7a383b659,Namespace:kube-system,Attempt:0,}"
Feb  9 18:43:57.066612 env[1222]: time="2024-02-09T18:43:57.066533370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 18:43:57.066612 env[1222]: time="2024-02-09T18:43:57.066574891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 18:43:57.066612 env[1222]: time="2024-02-09T18:43:57.066593891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 18:43:57.066779 env[1222]: time="2024-02-09T18:43:57.066732213Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f6d4ddd776afc0aa0563db95512b45b765eea05f8926ca40c04eee74c53c13e pid=3177 runtime=io.containerd.runc.v2
Feb  9 18:43:57.072669 env[1222]: time="2024-02-09T18:43:57.072564721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 18:43:57.072669 env[1222]: time="2024-02-09T18:43:57.072642962Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 18:43:57.072858 env[1222]: time="2024-02-09T18:43:57.072654602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 18:43:57.073290 env[1222]: time="2024-02-09T18:43:57.073246769Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2c0286bbbc7e7bbb4baa6351b9682cbba53dc01f8cd8498c734c20a823f2cbcd pid=3200 runtime=io.containerd.runc.v2
Feb  9 18:43:57.136910 env[1222]: time="2024-02-09T18:43:57.136774196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jbvc5,Uid:e2231165-68d6-446c-a11f-8ed7a383b659,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c0286bbbc7e7bbb4baa6351b9682cbba53dc01f8cd8498c734c20a823f2cbcd\""
Feb  9 18:43:57.138165 kubelet[1521]: E0209 18:43:57.138139    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:43:57.140138 env[1222]: time="2024-02-09T18:43:57.140101555Z" level=info msg="CreateContainer within sandbox \"2c0286bbbc7e7bbb4baa6351b9682cbba53dc01f8cd8498c734c20a823f2cbcd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb  9 18:43:57.146084 env[1222]: time="2024-02-09T18:43:57.146050665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-mpzx9,Uid:cd828511-6077-4789-b5c0-aa041725d6a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f6d4ddd776afc0aa0563db95512b45b765eea05f8926ca40c04eee74c53c13e\""
Feb  9 18:43:57.146633 kubelet[1521]: E0209 18:43:57.146613    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:43:57.147975 env[1222]: time="2024-02-09T18:43:57.147925647Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb  9 18:43:57.149582 env[1222]: time="2024-02-09T18:43:57.149527026Z" level=info msg="CreateContainer within sandbox \"2c0286bbbc7e7bbb4baa6351b9682cbba53dc01f8cd8498c734c20a823f2cbcd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"96fa554bcfe0c758027617f6b38aa65be5a20b4c86c03f793f5ab2b36f57c4f9\""
Feb  9 18:43:57.150120 env[1222]: time="2024-02-09T18:43:57.150093033Z" level=info msg="StartContainer for \"96fa554bcfe0c758027617f6b38aa65be5a20b4c86c03f793f5ab2b36f57c4f9\""
Feb  9 18:43:57.209632 env[1222]: time="2024-02-09T18:43:57.209582172Z" level=info msg="StartContainer for \"96fa554bcfe0c758027617f6b38aa65be5a20b4c86c03f793f5ab2b36f57c4f9\" returns successfully"
Feb  9 18:43:57.240898 env[1222]: time="2024-02-09T18:43:57.240847539Z" level=info msg="shim disconnected" id=96fa554bcfe0c758027617f6b38aa65be5a20b4c86c03f793f5ab2b36f57c4f9
Feb  9 18:43:57.240898 env[1222]: time="2024-02-09T18:43:57.240896620Z" level=warning msg="cleaning up after shim disconnected" id=96fa554bcfe0c758027617f6b38aa65be5a20b4c86c03f793f5ab2b36f57c4f9 namespace=k8s.io
Feb  9 18:43:57.240898 env[1222]: time="2024-02-09T18:43:57.240906700Z" level=info msg="cleaning up dead shim"
Feb  9 18:43:57.248382 env[1222]: time="2024-02-09T18:43:57.248349907Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:43:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3304 runtime=io.containerd.runc.v2\n"
Feb  9 18:43:57.590725 kubelet[1521]: E0209 18:43:57.590687    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:57.831634 env[1222]: time="2024-02-09T18:43:57.831600563Z" level=info msg="StopPodSandbox for \"2c0286bbbc7e7bbb4baa6351b9682cbba53dc01f8cd8498c734c20a823f2cbcd\""
Feb  9 18:43:57.831840 env[1222]: time="2024-02-09T18:43:57.831807085Z" level=info msg="Container to stop \"96fa554bcfe0c758027617f6b38aa65be5a20b4c86c03f793f5ab2b36f57c4f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  9 18:43:57.857093 env[1222]: time="2024-02-09T18:43:57.856985021Z" level=info msg="shim disconnected" id=2c0286bbbc7e7bbb4baa6351b9682cbba53dc01f8cd8498c734c20a823f2cbcd
Feb  9 18:43:57.857093 env[1222]: time="2024-02-09T18:43:57.857032382Z" level=warning msg="cleaning up after shim disconnected" id=2c0286bbbc7e7bbb4baa6351b9682cbba53dc01f8cd8498c734c20a823f2cbcd namespace=k8s.io
Feb  9 18:43:57.857093 env[1222]: time="2024-02-09T18:43:57.857041662Z" level=info msg="cleaning up dead shim"
Feb  9 18:43:57.865342 env[1222]: time="2024-02-09T18:43:57.865297239Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:43:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3337 runtime=io.containerd.runc.v2\n"
Feb  9 18:43:57.865623 env[1222]: time="2024-02-09T18:43:57.865583322Z" level=info msg="TearDown network for sandbox \"2c0286bbbc7e7bbb4baa6351b9682cbba53dc01f8cd8498c734c20a823f2cbcd\" successfully"
Feb  9 18:43:57.865623 env[1222]: time="2024-02-09T18:43:57.865613123Z" level=info msg="StopPodSandbox for \"2c0286bbbc7e7bbb4baa6351b9682cbba53dc01f8cd8498c734c20a823f2cbcd\" returns successfully"
Feb  9 18:43:57.984411 kubelet[1521]: I0209 18:43:57.984242    1521 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-lib-modules\") pod \"e2231165-68d6-446c-a11f-8ed7a383b659\" (UID: \"e2231165-68d6-446c-a11f-8ed7a383b659\") "
Feb  9 18:43:57.984411 kubelet[1521]: I0209 18:43:57.984282    1521 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-bpf-maps\") pod \"e2231165-68d6-446c-a11f-8ed7a383b659\" (UID: \"e2231165-68d6-446c-a11f-8ed7a383b659\") "
Feb  9 18:43:57.984411 kubelet[1521]: I0209 18:43:57.984306    1521 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-cilium-cgroup\") pod \"e2231165-68d6-446c-a11f-8ed7a383b659\" (UID: \"e2231165-68d6-446c-a11f-8ed7a383b659\") "
Feb  9 18:43:57.984411 kubelet[1521]: I0209 18:43:57.984332    1521 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e2231165-68d6-446c-a11f-8ed7a383b659-cilium-ipsec-secrets\") pod \"e2231165-68d6-446c-a11f-8ed7a383b659\" (UID: \"e2231165-68d6-446c-a11f-8ed7a383b659\") "
Feb  9 18:43:57.984411 kubelet[1521]: I0209 18:43:57.984352    1521 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-host-proc-sys-net\") pod \"e2231165-68d6-446c-a11f-8ed7a383b659\" (UID: \"e2231165-68d6-446c-a11f-8ed7a383b659\") "
Feb  9 18:43:57.984411 kubelet[1521]: I0209 18:43:57.984346    1521 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e2231165-68d6-446c-a11f-8ed7a383b659" (UID: "e2231165-68d6-446c-a11f-8ed7a383b659"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:43:57.984674 kubelet[1521]: I0209 18:43:57.984372    1521 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e2231165-68d6-446c-a11f-8ed7a383b659-hubble-tls\") pod \"e2231165-68d6-446c-a11f-8ed7a383b659\" (UID: \"e2231165-68d6-446c-a11f-8ed7a383b659\") "
Feb  9 18:43:57.984674 kubelet[1521]: I0209 18:43:57.984392    1521 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rspqv\" (UniqueName: \"kubernetes.io/projected/e2231165-68d6-446c-a11f-8ed7a383b659-kube-api-access-rspqv\") pod \"e2231165-68d6-446c-a11f-8ed7a383b659\" (UID: \"e2231165-68d6-446c-a11f-8ed7a383b659\") "
Feb  9 18:43:57.984674 kubelet[1521]: I0209 18:43:57.984392    1521 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e2231165-68d6-446c-a11f-8ed7a383b659" (UID: "e2231165-68d6-446c-a11f-8ed7a383b659"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:43:57.984674 kubelet[1521]: I0209 18:43:57.984410    1521 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e2231165-68d6-446c-a11f-8ed7a383b659" (UID: "e2231165-68d6-446c-a11f-8ed7a383b659"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:43:57.984674 kubelet[1521]: I0209 18:43:57.984426    1521 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e2231165-68d6-446c-a11f-8ed7a383b659" (UID: "e2231165-68d6-446c-a11f-8ed7a383b659"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:43:57.986541 kubelet[1521]: I0209 18:43:57.984973    1521 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e2231165-68d6-446c-a11f-8ed7a383b659" (UID: "e2231165-68d6-446c-a11f-8ed7a383b659"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:43:57.986541 kubelet[1521]: I0209 18:43:57.985088    1521 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-etc-cni-netd\") pod \"e2231165-68d6-446c-a11f-8ed7a383b659\" (UID: \"e2231165-68d6-446c-a11f-8ed7a383b659\") "
Feb  9 18:43:57.986541 kubelet[1521]: I0209 18:43:57.985116    1521 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-hostproc\") pod \"e2231165-68d6-446c-a11f-8ed7a383b659\" (UID: \"e2231165-68d6-446c-a11f-8ed7a383b659\") "
Feb  9 18:43:57.986541 kubelet[1521]: I0209 18:43:57.985138    1521 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e2231165-68d6-446c-a11f-8ed7a383b659-cilium-config-path\") pod \"e2231165-68d6-446c-a11f-8ed7a383b659\" (UID: \"e2231165-68d6-446c-a11f-8ed7a383b659\") "
Feb  9 18:43:57.986541 kubelet[1521]: I0209 18:43:57.985156    1521 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-host-proc-sys-kernel\") pod \"e2231165-68d6-446c-a11f-8ed7a383b659\" (UID: \"e2231165-68d6-446c-a11f-8ed7a383b659\") "
Feb  9 18:43:57.986541 kubelet[1521]: I0209 18:43:57.985195    1521 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e2231165-68d6-446c-a11f-8ed7a383b659-clustermesh-secrets\") pod \"e2231165-68d6-446c-a11f-8ed7a383b659\" (UID: \"e2231165-68d6-446c-a11f-8ed7a383b659\") "
Feb  9 18:43:57.986778 kubelet[1521]: I0209 18:43:57.985214    1521 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-xtables-lock\") pod \"e2231165-68d6-446c-a11f-8ed7a383b659\" (UID: \"e2231165-68d6-446c-a11f-8ed7a383b659\") "
Feb  9 18:43:57.986778 kubelet[1521]: I0209 18:43:57.985230    1521 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-cni-path\") pod \"e2231165-68d6-446c-a11f-8ed7a383b659\" (UID: \"e2231165-68d6-446c-a11f-8ed7a383b659\") "
Feb  9 18:43:57.986778 kubelet[1521]: I0209 18:43:57.985249    1521 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-cilium-run\") pod \"e2231165-68d6-446c-a11f-8ed7a383b659\" (UID: \"e2231165-68d6-446c-a11f-8ed7a383b659\") "
Feb  9 18:43:57.986778 kubelet[1521]: I0209 18:43:57.985280    1521 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-lib-modules\") on node \"10.0.0.123\" DevicePath \"\""
Feb  9 18:43:57.986778 kubelet[1521]: I0209 18:43:57.985290    1521 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-bpf-maps\") on node \"10.0.0.123\" DevicePath \"\""
Feb  9 18:43:57.986778 kubelet[1521]: I0209 18:43:57.985299    1521 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-etc-cni-netd\") on node \"10.0.0.123\" DevicePath \"\""
Feb  9 18:43:57.986778 kubelet[1521]: I0209 18:43:57.985310    1521 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-host-proc-sys-net\") on node \"10.0.0.123\" DevicePath \"\""
Feb  9 18:43:57.988592 kubelet[1521]: I0209 18:43:57.985332    1521 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e2231165-68d6-446c-a11f-8ed7a383b659" (UID: "e2231165-68d6-446c-a11f-8ed7a383b659"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:43:57.988592 kubelet[1521]: I0209 18:43:57.985354    1521 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-hostproc" (OuterVolumeSpecName: "hostproc") pod "e2231165-68d6-446c-a11f-8ed7a383b659" (UID: "e2231165-68d6-446c-a11f-8ed7a383b659"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:43:57.988592 kubelet[1521]: W0209 18:43:57.985470    1521 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/e2231165-68d6-446c-a11f-8ed7a383b659/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb  9 18:43:57.988592 kubelet[1521]: I0209 18:43:57.987116    1521 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e2231165-68d6-446c-a11f-8ed7a383b659" (UID: "e2231165-68d6-446c-a11f-8ed7a383b659"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:43:57.988592 kubelet[1521]: I0209 18:43:57.987157    1521 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e2231165-68d6-446c-a11f-8ed7a383b659" (UID: "e2231165-68d6-446c-a11f-8ed7a383b659"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:43:57.988366 systemd[1]: var-lib-kubelet-pods-e2231165\x2d68d6\x2d446c\x2da11f\x2d8ed7a383b659-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb  9 18:43:57.989025 kubelet[1521]: I0209 18:43:57.987174    1521 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-cni-path" (OuterVolumeSpecName: "cni-path") pod "e2231165-68d6-446c-a11f-8ed7a383b659" (UID: "e2231165-68d6-446c-a11f-8ed7a383b659"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:43:57.989025 kubelet[1521]: I0209 18:43:57.987505    1521 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2231165-68d6-446c-a11f-8ed7a383b659-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e2231165-68d6-446c-a11f-8ed7a383b659" (UID: "e2231165-68d6-446c-a11f-8ed7a383b659"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb  9 18:43:57.989454 kubelet[1521]: I0209 18:43:57.989412    1521 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2231165-68d6-446c-a11f-8ed7a383b659-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "e2231165-68d6-446c-a11f-8ed7a383b659" (UID: "e2231165-68d6-446c-a11f-8ed7a383b659"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb  9 18:43:57.990431 systemd[1]: var-lib-kubelet-pods-e2231165\x2d68d6\x2d446c\x2da11f\x2d8ed7a383b659-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb  9 18:43:57.992086 systemd[1]: var-lib-kubelet-pods-e2231165\x2d68d6\x2d446c\x2da11f\x2d8ed7a383b659-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drspqv.mount: Deactivated successfully.
Feb  9 18:43:57.992206 systemd[1]: var-lib-kubelet-pods-e2231165\x2d68d6\x2d446c\x2da11f\x2d8ed7a383b659-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb  9 18:43:57.993803 kubelet[1521]: I0209 18:43:57.993748    1521 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2231165-68d6-446c-a11f-8ed7a383b659-kube-api-access-rspqv" (OuterVolumeSpecName: "kube-api-access-rspqv") pod "e2231165-68d6-446c-a11f-8ed7a383b659" (UID: "e2231165-68d6-446c-a11f-8ed7a383b659"). InnerVolumeSpecName "kube-api-access-rspqv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb  9 18:43:57.994043 kubelet[1521]: I0209 18:43:57.994002    1521 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2231165-68d6-446c-a11f-8ed7a383b659-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e2231165-68d6-446c-a11f-8ed7a383b659" (UID: "e2231165-68d6-446c-a11f-8ed7a383b659"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb  9 18:43:57.994322 kubelet[1521]: I0209 18:43:57.994288    1521 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2231165-68d6-446c-a11f-8ed7a383b659-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e2231165-68d6-446c-a11f-8ed7a383b659" (UID: "e2231165-68d6-446c-a11f-8ed7a383b659"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb  9 18:43:58.085986 kubelet[1521]: I0209 18:43:58.085953    1521 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-cilium-run\") on node \"10.0.0.123\" DevicePath \"\""
Feb  9 18:43:58.085986 kubelet[1521]: I0209 18:43:58.085987    1521 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-xtables-lock\") on node \"10.0.0.123\" DevicePath \"\""
Feb  9 18:43:58.085986 kubelet[1521]: I0209 18:43:58.085997    1521 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-cni-path\") on node \"10.0.0.123\" DevicePath \"\""
Feb  9 18:43:58.086176 kubelet[1521]: I0209 18:43:58.086007    1521 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-cilium-cgroup\") on node \"10.0.0.123\" DevicePath \"\""
Feb  9 18:43:58.086176 kubelet[1521]: I0209 18:43:58.086020    1521 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-rspqv\" (UniqueName: \"kubernetes.io/projected/e2231165-68d6-446c-a11f-8ed7a383b659-kube-api-access-rspqv\") on node \"10.0.0.123\" DevicePath \"\""
Feb  9 18:43:58.086176 kubelet[1521]: I0209 18:43:58.086031    1521 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e2231165-68d6-446c-a11f-8ed7a383b659-cilium-ipsec-secrets\") on node \"10.0.0.123\" DevicePath \"\""
Feb  9 18:43:58.086176 kubelet[1521]: I0209 18:43:58.086040    1521 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e2231165-68d6-446c-a11f-8ed7a383b659-hubble-tls\") on node \"10.0.0.123\" DevicePath \"\""
Feb  9 18:43:58.086176 kubelet[1521]: I0209 18:43:58.086049    1521 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-hostproc\") on node \"10.0.0.123\" DevicePath \"\""
Feb  9 18:43:58.086176 kubelet[1521]: I0209 18:43:58.086058    1521 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e2231165-68d6-446c-a11f-8ed7a383b659-clustermesh-secrets\") on node \"10.0.0.123\" DevicePath \"\""
Feb  9 18:43:58.086176 kubelet[1521]: I0209 18:43:58.086068    1521 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e2231165-68d6-446c-a11f-8ed7a383b659-cilium-config-path\") on node \"10.0.0.123\" DevicePath \"\""
Feb  9 18:43:58.086176 kubelet[1521]: I0209 18:43:58.086078    1521 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e2231165-68d6-446c-a11f-8ed7a383b659-host-proc-sys-kernel\") on node \"10.0.0.123\" DevicePath \"\""
Feb  9 18:43:58.591547 kubelet[1521]: E0209 18:43:58.591509    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:58.835813 kubelet[1521]: I0209 18:43:58.835175    1521 scope.go:115] "RemoveContainer" containerID="96fa554bcfe0c758027617f6b38aa65be5a20b4c86c03f793f5ab2b36f57c4f9"
Feb  9 18:43:58.838676 env[1222]: time="2024-02-09T18:43:58.838644257Z" level=info msg="RemoveContainer for \"96fa554bcfe0c758027617f6b38aa65be5a20b4c86c03f793f5ab2b36f57c4f9\""
Feb  9 18:43:58.847896 env[1222]: time="2024-02-09T18:43:58.847822602Z" level=info msg="RemoveContainer for \"96fa554bcfe0c758027617f6b38aa65be5a20b4c86c03f793f5ab2b36f57c4f9\" returns successfully"
Feb  9 18:43:58.866722 kubelet[1521]: I0209 18:43:58.866692    1521 topology_manager.go:210] "Topology Admit Handler"
Feb  9 18:43:58.866904 kubelet[1521]: E0209 18:43:58.866744    1521 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e2231165-68d6-446c-a11f-8ed7a383b659" containerName="mount-cgroup"
Feb  9 18:43:58.866904 kubelet[1521]: I0209 18:43:58.866767    1521 memory_manager.go:346] "RemoveStaleState removing state" podUID="e2231165-68d6-446c-a11f-8ed7a383b659" containerName="mount-cgroup"
Feb  9 18:43:58.990911 kubelet[1521]: I0209 18:43:58.990772    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/813f93bd-5812-4ad0-96fb-4473ff856506-bpf-maps\") pod \"cilium-pvr6v\" (UID: \"813f93bd-5812-4ad0-96fb-4473ff856506\") " pod="kube-system/cilium-pvr6v"
Feb  9 18:43:58.990911 kubelet[1521]: I0209 18:43:58.990824    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/813f93bd-5812-4ad0-96fb-4473ff856506-cilium-ipsec-secrets\") pod \"cilium-pvr6v\" (UID: \"813f93bd-5812-4ad0-96fb-4473ff856506\") " pod="kube-system/cilium-pvr6v"
Feb  9 18:43:58.990911 kubelet[1521]: I0209 18:43:58.990852    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/813f93bd-5812-4ad0-96fb-4473ff856506-hubble-tls\") pod \"cilium-pvr6v\" (UID: \"813f93bd-5812-4ad0-96fb-4473ff856506\") " pod="kube-system/cilium-pvr6v"
Feb  9 18:43:58.990911 kubelet[1521]: I0209 18:43:58.990895    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jfmp\" (UniqueName: \"kubernetes.io/projected/813f93bd-5812-4ad0-96fb-4473ff856506-kube-api-access-9jfmp\") pod \"cilium-pvr6v\" (UID: \"813f93bd-5812-4ad0-96fb-4473ff856506\") " pod="kube-system/cilium-pvr6v"
Feb  9 18:43:58.991114 kubelet[1521]: I0209 18:43:58.990951    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/813f93bd-5812-4ad0-96fb-4473ff856506-host-proc-sys-net\") pod \"cilium-pvr6v\" (UID: \"813f93bd-5812-4ad0-96fb-4473ff856506\") " pod="kube-system/cilium-pvr6v"
Feb  9 18:43:58.991114 kubelet[1521]: I0209 18:43:58.990997    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/813f93bd-5812-4ad0-96fb-4473ff856506-hostproc\") pod \"cilium-pvr6v\" (UID: \"813f93bd-5812-4ad0-96fb-4473ff856506\") " pod="kube-system/cilium-pvr6v"
Feb  9 18:43:58.991114 kubelet[1521]: I0209 18:43:58.991028    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/813f93bd-5812-4ad0-96fb-4473ff856506-lib-modules\") pod \"cilium-pvr6v\" (UID: \"813f93bd-5812-4ad0-96fb-4473ff856506\") " pod="kube-system/cilium-pvr6v"
Feb  9 18:43:58.991114 kubelet[1521]: I0209 18:43:58.991049    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/813f93bd-5812-4ad0-96fb-4473ff856506-xtables-lock\") pod \"cilium-pvr6v\" (UID: \"813f93bd-5812-4ad0-96fb-4473ff856506\") " pod="kube-system/cilium-pvr6v"
Feb  9 18:43:58.991114 kubelet[1521]: I0209 18:43:58.991079    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/813f93bd-5812-4ad0-96fb-4473ff856506-cilium-config-path\") pod \"cilium-pvr6v\" (UID: \"813f93bd-5812-4ad0-96fb-4473ff856506\") " pod="kube-system/cilium-pvr6v"
Feb  9 18:43:58.991114 kubelet[1521]: I0209 18:43:58.991101    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/813f93bd-5812-4ad0-96fb-4473ff856506-cni-path\") pod \"cilium-pvr6v\" (UID: \"813f93bd-5812-4ad0-96fb-4473ff856506\") " pod="kube-system/cilium-pvr6v"
Feb  9 18:43:58.991249 kubelet[1521]: I0209 18:43:58.991133    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/813f93bd-5812-4ad0-96fb-4473ff856506-etc-cni-netd\") pod \"cilium-pvr6v\" (UID: \"813f93bd-5812-4ad0-96fb-4473ff856506\") " pod="kube-system/cilium-pvr6v"
Feb  9 18:43:58.991249 kubelet[1521]: I0209 18:43:58.991154    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/813f93bd-5812-4ad0-96fb-4473ff856506-host-proc-sys-kernel\") pod \"cilium-pvr6v\" (UID: \"813f93bd-5812-4ad0-96fb-4473ff856506\") " pod="kube-system/cilium-pvr6v"
Feb  9 18:43:58.991249 kubelet[1521]: I0209 18:43:58.991176    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/813f93bd-5812-4ad0-96fb-4473ff856506-cilium-run\") pod \"cilium-pvr6v\" (UID: \"813f93bd-5812-4ad0-96fb-4473ff856506\") " pod="kube-system/cilium-pvr6v"
Feb  9 18:43:58.991249 kubelet[1521]: I0209 18:43:58.991196    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/813f93bd-5812-4ad0-96fb-4473ff856506-cilium-cgroup\") pod \"cilium-pvr6v\" (UID: \"813f93bd-5812-4ad0-96fb-4473ff856506\") " pod="kube-system/cilium-pvr6v"
Feb  9 18:43:58.991249 kubelet[1521]: I0209 18:43:58.991224    1521 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/813f93bd-5812-4ad0-96fb-4473ff856506-clustermesh-secrets\") pod \"cilium-pvr6v\" (UID: \"813f93bd-5812-4ad0-96fb-4473ff856506\") " pod="kube-system/cilium-pvr6v"
Feb  9 18:43:59.118997 env[1222]: time="2024-02-09T18:43:59.118902007Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:43:59.122450 env[1222]: time="2024-02-09T18:43:59.122420527Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:43:59.125733 env[1222]: time="2024-02-09T18:43:59.124852074Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:43:59.125733 env[1222]: time="2024-02-09T18:43:59.125070237Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Feb  9 18:43:59.127181 env[1222]: time="2024-02-09T18:43:59.127149220Z" level=info msg="CreateContainer within sandbox \"7f6d4ddd776afc0aa0563db95512b45b765eea05f8926ca40c04eee74c53c13e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb  9 18:43:59.134577 env[1222]: time="2024-02-09T18:43:59.134533503Z" level=info msg="CreateContainer within sandbox \"7f6d4ddd776afc0aa0563db95512b45b765eea05f8926ca40c04eee74c53c13e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"af073e71de76d18f3406ae7d74b9f9c982e7f2e90ee05372d5ff10783d033c93\""
Feb  9 18:43:59.135036 env[1222]: time="2024-02-09T18:43:59.135000188Z" level=info msg="StartContainer for \"af073e71de76d18f3406ae7d74b9f9c982e7f2e90ee05372d5ff10783d033c93\""
Feb  9 18:43:59.169623 kubelet[1521]: E0209 18:43:59.169584    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:43:59.170213 env[1222]: time="2024-02-09T18:43:59.170179464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pvr6v,Uid:813f93bd-5812-4ad0-96fb-4473ff856506,Namespace:kube-system,Attempt:0,}"
Feb  9 18:43:59.239672 env[1222]: time="2024-02-09T18:43:59.239613124Z" level=info msg="StartContainer for \"af073e71de76d18f3406ae7d74b9f9c982e7f2e90ee05372d5ff10783d033c93\" returns successfully"
Feb  9 18:43:59.255890 env[1222]: time="2024-02-09T18:43:59.255775506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 18:43:59.255890 env[1222]: time="2024-02-09T18:43:59.255858026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 18:43:59.256068 env[1222]: time="2024-02-09T18:43:59.255872547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 18:43:59.256068 env[1222]: time="2024-02-09T18:43:59.256001188Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/051fc6faeea4c0181c389957831c4481654adfab4c513caac83af1bc6920de0f pid=3405 runtime=io.containerd.runc.v2
Feb  9 18:43:59.316818 env[1222]: time="2024-02-09T18:43:59.316745631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pvr6v,Uid:813f93bd-5812-4ad0-96fb-4473ff856506,Namespace:kube-system,Attempt:0,} returns sandbox id \"051fc6faeea4c0181c389957831c4481654adfab4c513caac83af1bc6920de0f\""
Feb  9 18:43:59.317441 kubelet[1521]: E0209 18:43:59.317411    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:43:59.319368 env[1222]: time="2024-02-09T18:43:59.319332180Z" level=info msg="CreateContainer within sandbox \"051fc6faeea4c0181c389957831c4481654adfab4c513caac83af1bc6920de0f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb  9 18:43:59.330433 env[1222]: time="2024-02-09T18:43:59.330387304Z" level=info msg="CreateContainer within sandbox \"051fc6faeea4c0181c389957831c4481654adfab4c513caac83af1bc6920de0f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9150651a5502be1ad86f7e79fea6cde9dbe94648ac0d894d96a9b0b0d4d220d7\""
Feb  9 18:43:59.330941 env[1222]: time="2024-02-09T18:43:59.330911990Z" level=info msg="StartContainer for \"9150651a5502be1ad86f7e79fea6cde9dbe94648ac0d894d96a9b0b0d4d220d7\""
Feb  9 18:43:59.386306 env[1222]: time="2024-02-09T18:43:59.386188171Z" level=info msg="StartContainer for \"9150651a5502be1ad86f7e79fea6cde9dbe94648ac0d894d96a9b0b0d4d220d7\" returns successfully"
Feb  9 18:43:59.415119 env[1222]: time="2024-02-09T18:43:59.415072736Z" level=info msg="shim disconnected" id=9150651a5502be1ad86f7e79fea6cde9dbe94648ac0d894d96a9b0b0d4d220d7
Feb  9 18:43:59.415119 env[1222]: time="2024-02-09T18:43:59.415117696Z" level=warning msg="cleaning up after shim disconnected" id=9150651a5502be1ad86f7e79fea6cde9dbe94648ac0d894d96a9b0b0d4d220d7 namespace=k8s.io
Feb  9 18:43:59.415119 env[1222]: time="2024-02-09T18:43:59.415127216Z" level=info msg="cleaning up dead shim"
Feb  9 18:43:59.423315 env[1222]: time="2024-02-09T18:43:59.423270028Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:43:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3487 runtime=io.containerd.runc.v2\n"
Feb  9 18:43:59.592149 kubelet[1521]: E0209 18:43:59.592087    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:43:59.596455 kubelet[1521]: E0209 18:43:59.596430    1521 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb  9 18:43:59.706395 kubelet[1521]: I0209 18:43:59.706312    1521 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=e2231165-68d6-446c-a11f-8ed7a383b659 path="/var/lib/kubelet/pods/e2231165-68d6-446c-a11f-8ed7a383b659/volumes"
Feb  9 18:43:59.839037 kubelet[1521]: E0209 18:43:59.838971    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:43:59.840441 kubelet[1521]: E0209 18:43:59.840408    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:43:59.842482 env[1222]: time="2024-02-09T18:43:59.842448299Z" level=info msg="CreateContainer within sandbox \"051fc6faeea4c0181c389957831c4481654adfab4c513caac83af1bc6920de0f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb  9 18:43:59.846866 kubelet[1521]: I0209 18:43:59.846770    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-mpzx9" podStartSLOduration=-9.22337203300804e+09 pod.CreationTimestamp="2024-02-09 18:43:56 +0000 UTC" firstStartedPulling="2024-02-09 18:43:57.147637804 +0000 UTC m=+68.484985041" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:43:59.846735267 +0000 UTC m=+71.184082504" watchObservedRunningTime="2024-02-09 18:43:59.846736787 +0000 UTC m=+71.184083984"
Feb  9 18:43:59.851977 env[1222]: time="2024-02-09T18:43:59.851933526Z" level=info msg="CreateContainer within sandbox \"051fc6faeea4c0181c389957831c4481654adfab4c513caac83af1bc6920de0f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"664656282083349fa6106c3088141cc3a22321be2eaaa083df25bb355f1d748c\""
Feb  9 18:43:59.852495 env[1222]: time="2024-02-09T18:43:59.852455211Z" level=info msg="StartContainer for \"664656282083349fa6106c3088141cc3a22321be2eaaa083df25bb355f1d748c\""
Feb  9 18:43:59.901182 env[1222]: time="2024-02-09T18:43:59.901133199Z" level=info msg="StartContainer for \"664656282083349fa6106c3088141cc3a22321be2eaaa083df25bb355f1d748c\" returns successfully"
Feb  9 18:43:59.925734 env[1222]: time="2024-02-09T18:43:59.925676434Z" level=info msg="shim disconnected" id=664656282083349fa6106c3088141cc3a22321be2eaaa083df25bb355f1d748c
Feb  9 18:43:59.925734 env[1222]: time="2024-02-09T18:43:59.925715955Z" level=warning msg="cleaning up after shim disconnected" id=664656282083349fa6106c3088141cc3a22321be2eaaa083df25bb355f1d748c namespace=k8s.io
Feb  9 18:43:59.925734 env[1222]: time="2024-02-09T18:43:59.925725155Z" level=info msg="cleaning up dead shim"
Feb  9 18:43:59.931896 env[1222]: time="2024-02-09T18:43:59.931861384Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:43:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3548 runtime=io.containerd.runc.v2\n"
Feb  9 18:44:00.592275 kubelet[1521]: E0209 18:44:00.592225    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:44:00.844368 kubelet[1521]: E0209 18:44:00.843966    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:44:00.844368 kubelet[1521]: E0209 18:44:00.844127    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:44:00.845573 env[1222]: time="2024-02-09T18:44:00.845535976Z" level=info msg="CreateContainer within sandbox \"051fc6faeea4c0181c389957831c4481654adfab4c513caac83af1bc6920de0f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb  9 18:44:00.857139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2390801687.mount: Deactivated successfully.
Feb  9 18:44:00.862237 env[1222]: time="2024-02-09T18:44:00.862184719Z" level=info msg="CreateContainer within sandbox \"051fc6faeea4c0181c389957831c4481654adfab4c513caac83af1bc6920de0f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"714a42739a9c29a838040c293616a85e4beaeaa0a789d2acfa257a7f667997ce\""
Feb  9 18:44:00.862722 env[1222]: time="2024-02-09T18:44:00.862649564Z" level=info msg="StartContainer for \"714a42739a9c29a838040c293616a85e4beaeaa0a789d2acfa257a7f667997ce\""
Feb  9 18:44:00.918588 env[1222]: time="2024-02-09T18:44:00.918543379Z" level=info msg="StartContainer for \"714a42739a9c29a838040c293616a85e4beaeaa0a789d2acfa257a7f667997ce\" returns successfully"
Feb  9 18:44:00.936687 env[1222]: time="2024-02-09T18:44:00.936637498Z" level=info msg="shim disconnected" id=714a42739a9c29a838040c293616a85e4beaeaa0a789d2acfa257a7f667997ce
Feb  9 18:44:00.936962 env[1222]: time="2024-02-09T18:44:00.936942742Z" level=warning msg="cleaning up after shim disconnected" id=714a42739a9c29a838040c293616a85e4beaeaa0a789d2acfa257a7f667997ce namespace=k8s.io
Feb  9 18:44:00.937044 env[1222]: time="2024-02-09T18:44:00.937029943Z" level=info msg="cleaning up dead shim"
Feb  9 18:44:00.942984 env[1222]: time="2024-02-09T18:44:00.942956368Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:44:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3606 runtime=io.containerd.runc.v2\n"
Feb  9 18:44:01.593305 kubelet[1521]: E0209 18:44:01.593269    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:44:01.847691 kubelet[1521]: E0209 18:44:01.847349    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:44:01.849409 env[1222]: time="2024-02-09T18:44:01.849372277Z" level=info msg="CreateContainer within sandbox \"051fc6faeea4c0181c389957831c4481654adfab4c513caac83af1bc6920de0f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb  9 18:44:01.863829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1545441739.mount: Deactivated successfully.
Feb  9 18:44:01.865857 env[1222]: time="2024-02-09T18:44:01.865810615Z" level=info msg="CreateContainer within sandbox \"051fc6faeea4c0181c389957831c4481654adfab4c513caac83af1bc6920de0f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a44f4408a4a350ac80217fa285adc6073f70f57b750b895b16410b7b5f326e24\""
Feb  9 18:44:01.866324 env[1222]: time="2024-02-09T18:44:01.866288740Z" level=info msg="StartContainer for \"a44f4408a4a350ac80217fa285adc6073f70f57b750b895b16410b7b5f326e24\""
Feb  9 18:44:01.927084 env[1222]: time="2024-02-09T18:44:01.927029955Z" level=info msg="StartContainer for \"a44f4408a4a350ac80217fa285adc6073f70f57b750b895b16410b7b5f326e24\" returns successfully"
Feb  9 18:44:01.945283 env[1222]: time="2024-02-09T18:44:01.945238991Z" level=info msg="shim disconnected" id=a44f4408a4a350ac80217fa285adc6073f70f57b750b895b16410b7b5f326e24
Feb  9 18:44:01.945283 env[1222]: time="2024-02-09T18:44:01.945282592Z" level=warning msg="cleaning up after shim disconnected" id=a44f4408a4a350ac80217fa285adc6073f70f57b750b895b16410b7b5f326e24 namespace=k8s.io
Feb  9 18:44:01.945283 env[1222]: time="2024-02-09T18:44:01.945291952Z" level=info msg="cleaning up dead shim"
Feb  9 18:44:01.952702 env[1222]: time="2024-02-09T18:44:01.952650351Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:44:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3660 runtime=io.containerd.runc.v2\n"
Feb  9 18:44:01.982124 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a44f4408a4a350ac80217fa285adc6073f70f57b750b895b16410b7b5f326e24-rootfs.mount: Deactivated successfully.
Feb  9 18:44:02.594411 kubelet[1521]: E0209 18:44:02.594367    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:44:02.851088 kubelet[1521]: E0209 18:44:02.851006    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:44:02.853200 env[1222]: time="2024-02-09T18:44:02.853164330Z" level=info msg="CreateContainer within sandbox \"051fc6faeea4c0181c389957831c4481654adfab4c513caac83af1bc6920de0f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb  9 18:44:02.867966 env[1222]: time="2024-02-09T18:44:02.867905126Z" level=info msg="CreateContainer within sandbox \"051fc6faeea4c0181c389957831c4481654adfab4c513caac83af1bc6920de0f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4b03ebf89324c958d96e30774790a5ea56b98627763d9858bd8f1fa56377b503\""
Feb  9 18:44:02.868661 env[1222]: time="2024-02-09T18:44:02.868608933Z" level=info msg="StartContainer for \"4b03ebf89324c958d96e30774790a5ea56b98627763d9858bd8f1fa56377b503\""
Feb  9 18:44:02.918983 env[1222]: time="2024-02-09T18:44:02.918929426Z" level=info msg="StartContainer for \"4b03ebf89324c958d96e30774790a5ea56b98627763d9858bd8f1fa56377b503\" returns successfully"
Feb  9 18:44:03.152873 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Feb  9 18:44:03.594660 kubelet[1521]: E0209 18:44:03.594619    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:44:03.855376 kubelet[1521]: E0209 18:44:03.855230    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:44:03.868358 kubelet[1521]: I0209 18:44:03.868306    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-pvr6v" podStartSLOduration=5.868276943 pod.CreationTimestamp="2024-02-09 18:43:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:44:03.86800402 +0000 UTC m=+75.205351257" watchObservedRunningTime="2024-02-09 18:44:03.868276943 +0000 UTC m=+75.205624140"
Feb  9 18:44:04.369513 kubelet[1521]: I0209 18:44:04.369490    1521 setters.go:548] "Node became not ready" node="10.0.0.123" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 18:44:04.369440203 +0000 UTC m=+75.706787440 LastTransitionTime:2024-02-09 18:44:04.369440203 +0000 UTC m=+75.706787440 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb  9 18:44:04.594778 kubelet[1521]: E0209 18:44:04.594738    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:44:04.857331 kubelet[1521]: E0209 18:44:04.857303    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:44:05.595439 kubelet[1521]: E0209 18:44:05.595383    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:44:05.822269 systemd-networkd[1103]: lxc_health: Link UP
Feb  9 18:44:05.833849 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb  9 18:44:05.835313 systemd-networkd[1103]: lxc_health: Gained carrier
Feb  9 18:44:05.858669 kubelet[1521]: E0209 18:44:05.858583    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:44:06.596290 kubelet[1521]: E0209 18:44:06.596239    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:44:06.953266 systemd-networkd[1103]: lxc_health: Gained IPv6LL
Feb  9 18:44:07.171628 kubelet[1521]: E0209 18:44:07.171599    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:44:07.361844 systemd[1]: run-containerd-runc-k8s.io-4b03ebf89324c958d96e30774790a5ea56b98627763d9858bd8f1fa56377b503-runc.bOyRKH.mount: Deactivated successfully.
Feb  9 18:44:07.596916 kubelet[1521]: E0209 18:44:07.596872    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:44:07.862007 kubelet[1521]: E0209 18:44:07.861980    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:44:08.597776 kubelet[1521]: E0209 18:44:08.597743    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:44:08.863575 kubelet[1521]: E0209 18:44:08.863482    1521 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:44:09.537774 kubelet[1521]: E0209 18:44:09.537725    1521 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:44:09.598355 kubelet[1521]: E0209 18:44:09.598326    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:44:10.599481 kubelet[1521]: E0209 18:44:10.599446    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:44:11.600597 kubelet[1521]: E0209 18:44:11.600541    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:44:12.601588 kubelet[1521]: E0209 18:44:12.601544    1521 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"