Feb 12 19:11:24.751392 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 12 19:11:24.751413 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Feb 12 18:07:00 -00 2024
Feb 12 19:11:24.751421 kernel: efi: EFI v2.70 by EDK II
Feb 12 19:11:24.751427 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Feb 12 19:11:24.751432 kernel: random: crng init done
Feb 12 19:11:24.751438 kernel: ACPI: Early table checksum verification disabled
Feb 12 19:11:24.751445 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Feb 12 19:11:24.751452 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 12 19:11:24.751458 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:11:24.751464 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:11:24.751469 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:11:24.751475 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:11:24.751481 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:11:24.751487 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:11:24.751495 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:11:24.751501 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:11:24.751508 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:11:24.751514 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 12 19:11:24.751520 kernel: NUMA: Failed to initialise from firmware
Feb 12 19:11:24.751526 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 12 19:11:24.751532 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Feb 12 19:11:24.751538 kernel: Zone ranges:
Feb 12 19:11:24.751544 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 12 19:11:24.751551 kernel: DMA32 empty
Feb 12 19:11:24.751557 kernel: Normal empty
Feb 12 19:11:24.751563 kernel: Movable zone start for each node
Feb 12 19:11:24.751569 kernel: Early memory node ranges
Feb 12 19:11:24.751575 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Feb 12 19:11:24.751581 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Feb 12 19:11:24.751587 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Feb 12 19:11:24.751593 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Feb 12 19:11:24.751599 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Feb 12 19:11:24.751605 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Feb 12 19:11:24.751611 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Feb 12 19:11:24.751617 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 12 19:11:24.751624 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 12 19:11:24.751631 kernel: psci: probing for conduit method from ACPI.
Feb 12 19:11:24.751636 kernel: psci: PSCIv1.1 detected in firmware.
Feb 12 19:11:24.751642 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 12 19:11:24.751648 kernel: psci: Trusted OS migration not required
Feb 12 19:11:24.751657 kernel: psci: SMC Calling Convention v1.1
Feb 12 19:11:24.751663 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 12 19:11:24.751671 kernel: ACPI: SRAT not present
Feb 12 19:11:24.751678 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 12 19:11:24.751684 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 12 19:11:24.751691 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 12 19:11:24.751697 kernel: Detected PIPT I-cache on CPU0
Feb 12 19:11:24.751704 kernel: CPU features: detected: GIC system register CPU interface
Feb 12 19:11:24.751710 kernel: CPU features: detected: Hardware dirty bit management
Feb 12 19:11:24.751717 kernel: CPU features: detected: Spectre-v4
Feb 12 19:11:24.751723 kernel: CPU features: detected: Spectre-BHB
Feb 12 19:11:24.751731 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 12 19:11:24.751737 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 12 19:11:24.751744 kernel: CPU features: detected: ARM erratum 1418040
Feb 12 19:11:24.751750 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 12 19:11:24.751756 kernel: Policy zone: DMA
Feb 12 19:11:24.751764 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40
Feb 12 19:11:24.751771 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 19:11:24.751777 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 12 19:11:24.751784 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 19:11:24.751790 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 19:11:24.751797 kernel: Memory: 2459152K/2572288K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 113136K reserved, 0K cma-reserved)
Feb 12 19:11:24.751804 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 12 19:11:24.751811 kernel: trace event string verifier disabled
Feb 12 19:11:24.751817 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 12 19:11:24.751824 kernel: rcu: RCU event tracing is enabled.
Feb 12 19:11:24.751831 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 12 19:11:24.751837 kernel: Trampoline variant of Tasks RCU enabled.
Feb 12 19:11:24.751844 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 19:11:24.751850 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 19:11:24.751857 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 12 19:11:24.751863 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 12 19:11:24.751870 kernel: GICv3: 256 SPIs implemented
Feb 12 19:11:24.751877 kernel: GICv3: 0 Extended SPIs implemented
Feb 12 19:11:24.751883 kernel: GICv3: Distributor has no Range Selector support
Feb 12 19:11:24.751890 kernel: Root IRQ handler: gic_handle_irq
Feb 12 19:11:24.751903 kernel: GICv3: 16 PPIs implemented
Feb 12 19:11:24.751910 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 12 19:11:24.751917 kernel: ACPI: SRAT not present
Feb 12 19:11:24.751923 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 12 19:11:24.751930 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 12 19:11:24.751936 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Feb 12 19:11:24.751943 kernel: GICv3: using LPI property table @0x00000000400d0000
Feb 12 19:11:24.751950 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Feb 12 19:11:24.751956 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:11:24.751964 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 12 19:11:24.751971 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 12 19:11:24.751978 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 12 19:11:24.751984 kernel: arm-pv: using stolen time PV
Feb 12 19:11:24.751991 kernel: Console: colour dummy device 80x25
Feb 12 19:11:24.751998 kernel: ACPI: Core revision 20210730
Feb 12 19:11:24.752004 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 12 19:11:24.752011 kernel: pid_max: default: 32768 minimum: 301
Feb 12 19:11:24.752018 kernel: LSM: Security Framework initializing
Feb 12 19:11:24.752024 kernel: SELinux: Initializing.
Feb 12 19:11:24.752032 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 19:11:24.752039 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 19:11:24.752045 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 19:11:24.752052 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 12 19:11:24.752058 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 12 19:11:24.752065 kernel: Remapping and enabling EFI services.
Feb 12 19:11:24.752071 kernel: smp: Bringing up secondary CPUs ...
Feb 12 19:11:24.752078 kernel: Detected PIPT I-cache on CPU1
Feb 12 19:11:24.752085 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 12 19:11:24.752093 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Feb 12 19:11:24.752099 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:11:24.752106 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 12 19:11:24.752113 kernel: Detected PIPT I-cache on CPU2
Feb 12 19:11:24.752119 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 12 19:11:24.752126 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Feb 12 19:11:24.752133 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:11:24.752139 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 12 19:11:24.752145 kernel: Detected PIPT I-cache on CPU3
Feb 12 19:11:24.752153 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 12 19:11:24.752160 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Feb 12 19:11:24.752166 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:11:24.752173 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 12 19:11:24.752179 kernel: smp: Brought up 1 node, 4 CPUs
Feb 12 19:11:24.752190 kernel: SMP: Total of 4 processors activated.
Feb 12 19:11:24.752198 kernel: CPU features: detected: 32-bit EL0 Support
Feb 12 19:11:24.752205 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 12 19:11:24.752212 kernel: CPU features: detected: Common not Private translations
Feb 12 19:11:24.752219 kernel: CPU features: detected: CRC32 instructions
Feb 12 19:11:24.752226 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 12 19:11:24.752233 kernel: CPU features: detected: LSE atomic instructions
Feb 12 19:11:24.752241 kernel: CPU features: detected: Privileged Access Never
Feb 12 19:11:24.752248 kernel: CPU features: detected: RAS Extension Support
Feb 12 19:11:24.752255 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 12 19:11:24.752262 kernel: CPU: All CPU(s) started at EL1
Feb 12 19:11:24.752269 kernel: alternatives: patching kernel code
Feb 12 19:11:24.752277 kernel: devtmpfs: initialized
Feb 12 19:11:24.752284 kernel: KASLR enabled
Feb 12 19:11:24.752292 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 19:11:24.752299 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 12 19:11:24.752306 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 19:11:24.752313 kernel: SMBIOS 3.0.0 present.
Feb 12 19:11:24.752328 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Feb 12 19:11:24.752335 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 19:11:24.752342 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 12 19:11:24.752351 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 12 19:11:24.752358 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 12 19:11:24.752365 kernel: audit: initializing netlink subsys (disabled)
Feb 12 19:11:24.752373 kernel: audit: type=2000 audit(0.035:1): state=initialized audit_enabled=0 res=1
Feb 12 19:11:24.752380 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 19:11:24.752387 kernel: cpuidle: using governor menu
Feb 12 19:11:24.752393 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 12 19:11:24.752400 kernel: ASID allocator initialised with 32768 entries
Feb 12 19:11:24.752407 kernel: ACPI: bus type PCI registered
Feb 12 19:11:24.752416 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 19:11:24.752423 kernel: Serial: AMBA PL011 UART driver
Feb 12 19:11:24.752430 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 12 19:11:24.752437 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 12 19:11:24.752444 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 19:11:24.752451 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 12 19:11:24.752458 kernel: cryptd: max_cpu_qlen set to 1000
Feb 12 19:11:24.752465 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 12 19:11:24.752472 kernel: ACPI: Added _OSI(Module Device)
Feb 12 19:11:24.752480 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 19:11:24.752487 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 19:11:24.752494 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 19:11:24.752501 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 19:11:24.752508 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 19:11:24.752515 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 19:11:24.752522 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 19:11:24.752529 kernel: ACPI: Interpreter enabled
Feb 12 19:11:24.752536 kernel: ACPI: Using GIC for interrupt routing
Feb 12 19:11:24.752542 kernel: ACPI: MCFG table detected, 1 entries
Feb 12 19:11:24.752551 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 12 19:11:24.752558 kernel: printk: console [ttyAMA0] enabled
Feb 12 19:11:24.752565 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 12 19:11:24.752688 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 12 19:11:24.752758 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 12 19:11:24.752822 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 12 19:11:24.752885 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 12 19:11:24.752962 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 12 19:11:24.752972 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 12 19:11:24.752979 kernel: PCI host bridge to bus 0000:00
Feb 12 19:11:24.753056 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 12 19:11:24.753115 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 12 19:11:24.753172 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 12 19:11:24.753228 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 12 19:11:24.753306 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 12 19:11:24.753402 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 12 19:11:24.753468 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 12 19:11:24.753534 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 12 19:11:24.753598 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 12 19:11:24.753664 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 12 19:11:24.753729 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 12 19:11:24.753798 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 12 19:11:24.753858 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 12 19:11:24.753924 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 12 19:11:24.753983 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 12 19:11:24.753992 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 12 19:11:24.754000 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 12 19:11:24.754007 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 12 19:11:24.754016 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 12 19:11:24.754023 kernel: iommu: Default domain type: Translated
Feb 12 19:11:24.754030 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 12 19:11:24.754037 kernel: vgaarb: loaded
Feb 12 19:11:24.754044 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 19:11:24.754051 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 12 19:11:24.754057 kernel: PTP clock support registered
Feb 12 19:11:24.754064 kernel: Registered efivars operations
Feb 12 19:11:24.754071 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 12 19:11:24.754079 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 19:11:24.754086 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 19:11:24.754093 kernel: pnp: PnP ACPI init
Feb 12 19:11:24.754162 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 12 19:11:24.754173 kernel: pnp: PnP ACPI: found 1 devices
Feb 12 19:11:24.754180 kernel: NET: Registered PF_INET protocol family
Feb 12 19:11:24.754187 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 12 19:11:24.754194 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 12 19:11:24.754203 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 19:11:24.754210 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 19:11:24.754217 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 12 19:11:24.754224 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 12 19:11:24.754231 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 19:11:24.754239 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 19:11:24.754246 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 19:11:24.754253 kernel: PCI: CLS 0 bytes, default 64
Feb 12 19:11:24.754260 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 12 19:11:24.754269 kernel: kvm [1]: HYP mode not available
Feb 12 19:11:24.754276 kernel: Initialise system trusted keyrings
Feb 12 19:11:24.754283 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 12 19:11:24.754290 kernel: Key type asymmetric registered
Feb 12 19:11:24.754297 kernel: Asymmetric key parser 'x509' registered
Feb 12 19:11:24.754304 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 19:11:24.754311 kernel: io scheduler mq-deadline registered
Feb 12 19:11:24.754327 kernel: io scheduler kyber registered
Feb 12 19:11:24.754335 kernel: io scheduler bfq registered
Feb 12 19:11:24.754344 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 12 19:11:24.754351 kernel: ACPI: button: Power Button [PWRB]
Feb 12 19:11:24.754358 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 12 19:11:24.754430 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 12 19:11:24.754440 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 19:11:24.754447 kernel: thunder_xcv, ver 1.0
Feb 12 19:11:24.754454 kernel: thunder_bgx, ver 1.0
Feb 12 19:11:24.754461 kernel: nicpf, ver 1.0
Feb 12 19:11:24.754468 kernel: nicvf, ver 1.0
Feb 12 19:11:24.754547 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 12 19:11:24.754614 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-12T19:11:24 UTC (1707765084)
Feb 12 19:11:24.754623 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 12 19:11:24.754630 kernel: NET: Registered PF_INET6 protocol family
Feb 12 19:11:24.754637 kernel: Segment Routing with IPv6
Feb 12 19:11:24.754644 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 19:11:24.754651 kernel: NET: Registered PF_PACKET protocol family
Feb 12 19:11:24.754658 kernel: Key type dns_resolver registered
Feb 12 19:11:24.754665 kernel: registered taskstats version 1
Feb 12 19:11:24.754674 kernel: Loading compiled-in X.509 certificates
Feb 12 19:11:24.754682 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: c8c3faa6fd8ae0112832fff0e3d0e58448a7eb6c'
Feb 12 19:11:24.754689 kernel: Key type .fscrypt registered
Feb 12 19:11:24.754695 kernel: Key type fscrypt-provisioning registered
Feb 12 19:11:24.754702 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 19:11:24.754710 kernel: ima: Allocated hash algorithm: sha1
Feb 12 19:11:24.754717 kernel: ima: No architecture policies found
Feb 12 19:11:24.754724 kernel: Freeing unused kernel memory: 34688K
Feb 12 19:11:24.754732 kernel: Run /init as init process
Feb 12 19:11:24.754739 kernel: with arguments:
Feb 12 19:11:24.754746 kernel: /init
Feb 12 19:11:24.754753 kernel: with environment:
Feb 12 19:11:24.754760 kernel: HOME=/
Feb 12 19:11:24.754767 kernel: TERM=linux
Feb 12 19:11:24.754774 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 19:11:24.754783 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 19:11:24.754794 systemd[1]: Detected virtualization kvm.
Feb 12 19:11:24.754801 systemd[1]: Detected architecture arm64.
Feb 12 19:11:24.754809 systemd[1]: Running in initrd.
Feb 12 19:11:24.754816 systemd[1]: No hostname configured, using default hostname.
Feb 12 19:11:24.754824 systemd[1]: Hostname set to <localhost>.
Feb 12 19:11:24.754837 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 19:11:24.754845 systemd[1]: Queued start job for default target initrd.target.
Feb 12 19:11:24.754853 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 19:11:24.754862 systemd[1]: Reached target cryptsetup.target.
Feb 12 19:11:24.754869 systemd[1]: Reached target paths.target.
Feb 12 19:11:24.754877 systemd[1]: Reached target slices.target.
Feb 12 19:11:24.754884 systemd[1]: Reached target swap.target.
Feb 12 19:11:24.754892 systemd[1]: Reached target timers.target.
Feb 12 19:11:24.754907 systemd[1]: Listening on iscsid.socket.
Feb 12 19:11:24.754914 systemd[1]: Listening on iscsiuio.socket.
Feb 12 19:11:24.754922 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 19:11:24.754931 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 19:11:24.754939 systemd[1]: Listening on systemd-journald.socket.
Feb 12 19:11:24.754946 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 19:11:24.754954 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 19:11:24.754961 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 19:11:24.754969 systemd[1]: Reached target sockets.target.
Feb 12 19:11:24.754976 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 19:11:24.754984 systemd[1]: Finished network-cleanup.service.
Feb 12 19:11:24.754992 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 19:11:24.755001 systemd[1]: Starting systemd-journald.service...
Feb 12 19:11:24.755009 systemd[1]: Starting systemd-modules-load.service...
Feb 12 19:11:24.755016 systemd[1]: Starting systemd-resolved.service...
Feb 12 19:11:24.755024 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 19:11:24.755032 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 19:11:24.755039 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 19:11:24.755047 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 19:11:24.755054 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 19:11:24.755063 kernel: audit: type=1130 audit(1707765084.752:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:24.755071 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 19:11:24.755083 systemd-journald[289]: Journal started
Feb 12 19:11:24.755124 systemd-journald[289]: Runtime Journal (/run/log/journal/c7c32158855845fbb4382fc036a2ce68) is 6.0M, max 48.7M, 42.6M free.
Feb 12 19:11:24.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:24.739844 systemd-modules-load[290]: Inserted module 'overlay'
Feb 12 19:11:24.760156 kernel: audit: type=1130 audit(1707765084.755:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:24.760181 systemd[1]: Started systemd-journald.service.
Feb 12 19:11:24.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:24.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:24.761740 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 12 19:11:24.773030 kernel: audit: type=1130 audit(1707765084.760:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:24.773053 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 19:11:24.773069 kernel: Bridge firewalling registered
Feb 12 19:11:24.769544 systemd-modules-load[290]: Inserted module 'br_netfilter'
Feb 12 19:11:24.782340 kernel: SCSI subsystem initialized
Feb 12 19:11:24.783834 systemd-resolved[291]: Positive Trust Anchors:
Feb 12 19:11:24.783850 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 19:11:24.783878 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 19:11:24.788832 systemd-resolved[291]: Defaulting to hostname 'linux'.
Feb 12 19:11:24.795718 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 12 19:11:24.795739 kernel: audit: type=1130 audit(1707765084.792:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:24.795750 kernel: device-mapper: uevent: version 1.0.3
Feb 12 19:11:24.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:24.790286 systemd[1]: Started systemd-resolved.service.
Feb 12 19:11:24.797705 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 12 19:11:24.793066 systemd[1]: Reached target nss-lookup.target.
Feb 12 19:11:24.797865 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 12 19:11:24.801438 kernel: audit: type=1130 audit(1707765084.799:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:24.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:24.799959 systemd[1]: Starting dracut-cmdline.service...
Feb 12 19:11:24.804252 systemd-modules-load[290]: Inserted module 'dm_multipath'
Feb 12 19:11:24.805112 systemd[1]: Finished systemd-modules-load.service.
Feb 12 19:11:24.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:24.806919 systemd[1]: Starting systemd-sysctl.service...
Feb 12 19:11:24.809797 kernel: audit: type=1130 audit(1707765084.805:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:24.813700 dracut-cmdline[308]: dracut-dracut-053
Feb 12 19:11:24.816158 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40
Feb 12 19:11:24.819776 systemd[1]: Finished systemd-sysctl.service.
Feb 12 19:11:24.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:24.823343 kernel: audit: type=1130 audit(1707765084.820:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:24.884343 kernel: Loading iSCSI transport class v2.0-870.
Feb 12 19:11:24.894345 kernel: iscsi: registered transport (tcp)
Feb 12 19:11:24.907607 kernel: iscsi: registered transport (qla4xxx)
Feb 12 19:11:24.907627 kernel: QLogic iSCSI HBA Driver
Feb 12 19:11:24.948007 systemd[1]: Finished dracut-cmdline.service.
Feb 12 19:11:24.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:24.949575 systemd[1]: Starting dracut-pre-udev.service...
Feb 12 19:11:24.952070 kernel: audit: type=1130 audit(1707765084.948:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:24.997345 kernel: raid6: neonx8 gen() 13875 MB/s
Feb 12 19:11:25.013334 kernel: raid6: neonx8 xor() 10826 MB/s
Feb 12 19:11:25.030335 kernel: raid6: neonx4 gen() 13530 MB/s
Feb 12 19:11:25.047332 kernel: raid6: neonx4 xor() 11239 MB/s
Feb 12 19:11:25.064328 kernel: raid6: neonx2 gen() 12987 MB/s
Feb 12 19:11:25.081328 kernel: raid6: neonx2 xor() 10259 MB/s
Feb 12 19:11:25.098339 kernel: raid6: neonx1 gen() 10492 MB/s
Feb 12 19:11:25.115328 kernel: raid6: neonx1 xor() 8783 MB/s
Feb 12 19:11:25.132331 kernel: raid6: int64x8 gen() 6292 MB/s
Feb 12 19:11:25.149335 kernel: raid6: int64x8 xor() 3544 MB/s
Feb 12 19:11:25.166336 kernel: raid6: int64x4 gen() 7217 MB/s
Feb 12 19:11:25.183353 kernel: raid6: int64x4 xor() 3852 MB/s
Feb 12 19:11:25.200348 kernel: raid6: int64x2 gen() 6149 MB/s
Feb 12 19:11:25.217353 kernel: raid6: int64x2 xor() 3317 MB/s
Feb 12 19:11:25.234347 kernel: raid6: int64x1 gen() 5044 MB/s
Feb 12 19:11:25.251547 kernel: raid6: int64x1 xor() 2644 MB/s
Feb 12 19:11:25.251591 kernel: raid6: using algorithm neonx8 gen() 13875 MB/s
Feb 12 19:11:25.251601 kernel: raid6: .... xor() 10826 MB/s, rmw enabled
Feb 12 19:11:25.251610 kernel: raid6: using neon recovery algorithm
Feb 12 19:11:25.262579 kernel: xor: measuring software checksum speed
Feb 12 19:11:25.262615 kernel: 8regs : 17297 MB/sec
Feb 12 19:11:25.263415 kernel: 32regs : 20749 MB/sec
Feb 12 19:11:25.264578 kernel: arm64_neon : 27939 MB/sec
Feb 12 19:11:25.264599 kernel: xor: using function: arm64_neon (27939 MB/sec)
Feb 12 19:11:25.318358 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 12 19:11:25.332234 systemd[1]: Finished dracut-pre-udev.service.
Feb 12 19:11:25.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:25.334000 audit: BPF prog-id=7 op=LOAD
Feb 12 19:11:25.335343 kernel: audit: type=1130 audit(1707765085.332:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:25.334000 audit: BPF prog-id=8 op=LOAD
Feb 12 19:11:25.335713 systemd[1]: Starting systemd-udevd.service...
Feb 12 19:11:25.369082 systemd-udevd[491]: Using default interface naming scheme 'v252'.
Feb 12 19:11:25.372446 systemd[1]: Started systemd-udevd.service.
Feb 12 19:11:25.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:25.374394 systemd[1]: Starting dracut-pre-trigger.service...
Feb 12 19:11:25.387522 dracut-pre-trigger[497]: rd.md=0: removing MD RAID activation
Feb 12 19:11:25.418208 systemd[1]: Finished dracut-pre-trigger.service.
Feb 12 19:11:25.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:25.419890 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 19:11:25.455600 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 19:11:25.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:25.493960 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 12 19:11:25.496500 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 12 19:11:25.496533 kernel: GPT:9289727 != 19775487
Feb 12 19:11:25.496543 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 12 19:11:25.497832 kernel: GPT:9289727 != 19775487
Feb 12 19:11:25.497862 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 12 19:11:25.497873 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 19:11:25.515345 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (551)
Feb 12 19:11:25.517903 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 12 19:11:25.524417 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 12 19:11:25.525170 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 12 19:11:25.529193 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 12 19:11:25.534371 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 19:11:25.535828 systemd[1]: Starting disk-uuid.service...
Feb 12 19:11:25.542239 disk-uuid[563]: Primary Header is updated.
Feb 12 19:11:25.542239 disk-uuid[563]: Secondary Entries is updated.
Feb 12 19:11:25.542239 disk-uuid[563]: Secondary Header is updated.
Feb 12 19:11:25.544653 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 19:11:26.555755 disk-uuid[564]: The operation has completed successfully.
Feb 12 19:11:26.556783 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:11:26.578566 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 12 19:11:26.578661 systemd[1]: Finished disk-uuid.service. Feb 12 19:11:26.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:26.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:26.582613 systemd[1]: Starting verity-setup.service... Feb 12 19:11:26.599769 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 12 19:11:26.629519 systemd[1]: Found device dev-mapper-usr.device. Feb 12 19:11:26.631713 systemd[1]: Mounting sysusr-usr.mount... Feb 12 19:11:26.633376 systemd[1]: Finished verity-setup.service. Feb 12 19:11:26.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:26.681357 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 12 19:11:26.681475 systemd[1]: Mounted sysusr-usr.mount. Feb 12 19:11:26.682196 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 12 19:11:26.682906 systemd[1]: Starting ignition-setup.service... Feb 12 19:11:26.684797 systemd[1]: Starting parse-ip-for-networkd.service... Feb 12 19:11:26.690812 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 12 19:11:26.690845 kernel: BTRFS info (device vda6): using free space tree Feb 12 19:11:26.690855 kernel: BTRFS info (device vda6): has skinny extents Feb 12 19:11:26.700060 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Feb 12 19:11:26.705703 systemd[1]: Finished ignition-setup.service. Feb 12 19:11:26.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:26.707128 systemd[1]: Starting ignition-fetch-offline.service... Feb 12 19:11:26.769179 systemd[1]: Finished parse-ip-for-networkd.service. Feb 12 19:11:26.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:26.770000 audit: BPF prog-id=9 op=LOAD Feb 12 19:11:26.771391 systemd[1]: Starting systemd-networkd.service... Feb 12 19:11:26.797023 ignition[646]: Ignition 2.14.0 Feb 12 19:11:26.797033 ignition[646]: Stage: fetch-offline Feb 12 19:11:26.797069 ignition[646]: no configs at "/usr/lib/ignition/base.d" Feb 12 19:11:26.797078 ignition[646]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:11:26.797211 ignition[646]: parsed url from cmdline: "" Feb 12 19:11:26.797214 ignition[646]: no config URL provided Feb 12 19:11:26.797220 ignition[646]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 19:11:26.797228 ignition[646]: no config at "/usr/lib/ignition/user.ign" Feb 12 19:11:26.801757 systemd-networkd[738]: lo: Link UP Feb 12 19:11:26.797244 ignition[646]: op(1): [started] loading QEMU firmware config module Feb 12 19:11:26.801760 systemd-networkd[738]: lo: Gained carrier Feb 12 19:11:26.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:11:26.797249 ignition[646]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 12 19:11:26.802410 systemd-networkd[738]: Enumeration completed Feb 12 19:11:26.804962 ignition[646]: op(1): [finished] loading QEMU firmware config module Feb 12 19:11:26.802599 systemd-networkd[738]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:11:26.803096 systemd[1]: Started systemd-networkd.service. Feb 12 19:11:26.804295 systemd[1]: Reached target network.target. Feb 12 19:11:26.806305 systemd[1]: Starting iscsiuio.service... Feb 12 19:11:26.806575 systemd-networkd[738]: eth0: Link UP Feb 12 19:11:26.806578 systemd-networkd[738]: eth0: Gained carrier Feb 12 19:11:26.817621 systemd[1]: Started iscsiuio.service. Feb 12 19:11:26.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:26.819424 systemd[1]: Starting iscsid.service... Feb 12 19:11:26.822891 iscsid[745]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:11:26.822891 iscsid[745]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 12 19:11:26.822891 iscsid[745]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 12 19:11:26.822891 iscsid[745]: If using hardware iscsi like qla4xxx this message can be ignored. 
Feb 12 19:11:26.822891 iscsid[745]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:11:26.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:26.833925 iscsid[745]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 12 19:11:26.826638 systemd[1]: Started iscsid.service. Feb 12 19:11:26.830072 systemd-networkd[738]: eth0: DHCPv4 address 10.0.0.30/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 12 19:11:26.830232 systemd[1]: Starting dracut-initqueue.service... Feb 12 19:11:26.841199 systemd[1]: Finished dracut-initqueue.service. Feb 12 19:11:26.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:26.842190 systemd[1]: Reached target remote-fs-pre.target. Feb 12 19:11:26.843607 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:11:26.845052 systemd[1]: Reached target remote-fs.target. Feb 12 19:11:26.847260 systemd[1]: Starting dracut-pre-mount.service... Feb 12 19:11:26.847299 ignition[646]: parsing config with SHA512: 9d205a53da464f6f6f98ef946bdb0d5202754c1cced6ca070d4e154fc68b65692388b9fd61d047465d43d62b783c63a18d126eb0f85ddeef239569919d15efc0 Feb 12 19:11:26.855312 systemd[1]: Finished dracut-pre-mount.service. Feb 12 19:11:26.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:11:26.870577 unknown[646]: fetched base config from "system" Feb 12 19:11:26.870587 unknown[646]: fetched user config from "qemu" Feb 12 19:11:26.871037 ignition[646]: fetch-offline: fetch-offline passed Feb 12 19:11:26.871100 ignition[646]: Ignition finished successfully Feb 12 19:11:26.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:26.872128 systemd[1]: Finished ignition-fetch-offline.service. Feb 12 19:11:26.873081 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 12 19:11:26.873839 systemd[1]: Starting ignition-kargs.service... Feb 12 19:11:26.882730 ignition[761]: Ignition 2.14.0 Feb 12 19:11:26.882739 ignition[761]: Stage: kargs Feb 12 19:11:26.882831 ignition[761]: no configs at "/usr/lib/ignition/base.d" Feb 12 19:11:26.882841 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:11:26.883770 ignition[761]: kargs: kargs passed Feb 12 19:11:26.885384 systemd[1]: Finished ignition-kargs.service. Feb 12 19:11:26.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:26.883813 ignition[761]: Ignition finished successfully Feb 12 19:11:26.887116 systemd[1]: Starting ignition-disks.service... Feb 12 19:11:26.893601 ignition[767]: Ignition 2.14.0 Feb 12 19:11:26.893611 ignition[767]: Stage: disks Feb 12 19:11:26.893705 ignition[767]: no configs at "/usr/lib/ignition/base.d" Feb 12 19:11:26.893715 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:11:26.895576 systemd[1]: Finished ignition-disks.service. 
Feb 12 19:11:26.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:26.894719 ignition[767]: disks: disks passed Feb 12 19:11:26.896898 systemd[1]: Reached target initrd-root-device.target. Feb 12 19:11:26.894768 ignition[767]: Ignition finished successfully Feb 12 19:11:26.897949 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:11:26.898956 systemd[1]: Reached target local-fs.target. Feb 12 19:11:26.900055 systemd[1]: Reached target sysinit.target. Feb 12 19:11:26.901068 systemd[1]: Reached target basic.target. Feb 12 19:11:26.902983 systemd[1]: Starting systemd-fsck-root.service... Feb 12 19:11:26.914341 systemd-fsck[775]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 12 19:11:26.917666 systemd[1]: Finished systemd-fsck-root.service. Feb 12 19:11:26.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:26.919204 systemd[1]: Mounting sysroot.mount... Feb 12 19:11:26.924349 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 12 19:11:26.924778 systemd[1]: Mounted sysroot.mount. Feb 12 19:11:26.925475 systemd[1]: Reached target initrd-root-fs.target. Feb 12 19:11:26.927772 systemd[1]: Mounting sysroot-usr.mount... Feb 12 19:11:26.928554 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 12 19:11:26.928590 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 12 19:11:26.928611 systemd[1]: Reached target ignition-diskful.target. Feb 12 19:11:26.930415 systemd[1]: Mounted sysroot-usr.mount. 
Feb 12 19:11:26.932752 systemd[1]: Starting initrd-setup-root.service... Feb 12 19:11:26.936999 initrd-setup-root[785]: cut: /sysroot/etc/passwd: No such file or directory Feb 12 19:11:26.941114 initrd-setup-root[793]: cut: /sysroot/etc/group: No such file or directory Feb 12 19:11:26.946173 initrd-setup-root[801]: cut: /sysroot/etc/shadow: No such file or directory Feb 12 19:11:26.949985 initrd-setup-root[809]: cut: /sysroot/etc/gshadow: No such file or directory Feb 12 19:11:26.980039 systemd[1]: Finished initrd-setup-root.service. Feb 12 19:11:26.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:26.981557 systemd[1]: Starting ignition-mount.service... Feb 12 19:11:26.982694 systemd[1]: Starting sysroot-boot.service... Feb 12 19:11:26.987227 bash[826]: umount: /sysroot/usr/share/oem: not mounted. Feb 12 19:11:26.995287 ignition[828]: INFO : Ignition 2.14.0 Feb 12 19:11:26.995287 ignition[828]: INFO : Stage: mount Feb 12 19:11:26.997381 ignition[828]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 12 19:11:26.997381 ignition[828]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:11:26.997381 ignition[828]: INFO : mount: mount passed Feb 12 19:11:26.997381 ignition[828]: INFO : Ignition finished successfully Feb 12 19:11:26.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:26.998494 systemd[1]: Finished ignition-mount.service. Feb 12 19:11:27.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:27.001006 systemd[1]: Finished sysroot-boot.service. 
Feb 12 19:11:27.642601 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 19:11:27.652531 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (836) Feb 12 19:11:27.652573 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 12 19:11:27.652590 kernel: BTRFS info (device vda6): using free space tree Feb 12 19:11:27.653448 kernel: BTRFS info (device vda6): has skinny extents Feb 12 19:11:27.657474 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 19:11:27.659279 systemd[1]: Starting ignition-files.service... Feb 12 19:11:27.673659 ignition[856]: INFO : Ignition 2.14.0 Feb 12 19:11:27.673659 ignition[856]: INFO : Stage: files Feb 12 19:11:27.674984 ignition[856]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 12 19:11:27.674984 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:11:27.674984 ignition[856]: DEBUG : files: compiled without relabeling support, skipping Feb 12 19:11:27.677542 ignition[856]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 12 19:11:27.677542 ignition[856]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 12 19:11:27.682568 ignition[856]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 12 19:11:27.683581 ignition[856]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 12 19:11:27.684750 unknown[856]: wrote ssh authorized keys file for user: core Feb 12 19:11:27.685606 ignition[856]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 12 19:11:27.685606 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 12 19:11:27.685606 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 12 19:11:27.685606 ignition[856]: 
INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 12 19:11:27.685606 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1 Feb 12 19:11:28.005146 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 12 19:11:28.206962 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742 Feb 12 19:11:28.209114 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 12 19:11:28.209114 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 12 19:11:28.209114 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1 Feb 12 19:11:28.307506 systemd-networkd[738]: eth0: Gained IPv6LL Feb 12 19:11:28.390242 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 12 19:11:28.510641 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c Feb 12 19:11:28.512974 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 12 19:11:28.512974 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file 
"/sysroot/opt/bin/kubeadm" Feb 12 19:11:28.512974 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1 Feb 12 19:11:28.559577 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 12 19:11:28.932378 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db Feb 12 19:11:28.934606 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 12 19:11:28.934606 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Feb 12 19:11:28.934606 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1 Feb 12 19:11:28.957958 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 12 19:11:29.915865 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d Feb 12 19:11:29.915865 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 12 19:11:29.919808 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh" Feb 12 19:11:29.919808 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh" Feb 12 19:11:29.919808 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" 
Feb 12 19:11:29.919808 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 12 19:11:29.919808 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 19:11:29.919808 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 19:11:29.919808 ignition[856]: INFO : files: op(b): [started] processing unit "containerd.service" Feb 12 19:11:29.919808 ignition[856]: INFO : files: op(b): op(c): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 12 19:11:29.919808 ignition[856]: INFO : files: op(b): op(c): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 12 19:11:29.919808 ignition[856]: INFO : files: op(b): [finished] processing unit "containerd.service" Feb 12 19:11:29.919808 ignition[856]: INFO : files: op(d): [started] processing unit "prepare-cni-plugins.service" Feb 12 19:11:29.919808 ignition[856]: INFO : files: op(d): op(e): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 19:11:29.919808 ignition[856]: INFO : files: op(d): op(e): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 19:11:29.919808 ignition[856]: INFO : files: op(d): [finished] processing unit "prepare-cni-plugins.service" Feb 12 19:11:29.919808 ignition[856]: INFO : files: op(f): [started] processing unit "prepare-critools.service" Feb 12 19:11:29.919808 ignition[856]: INFO : files: op(f): op(10): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 19:11:29.919808 ignition[856]: INFO : 
files: op(f): op(10): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 19:11:29.944783 ignition[856]: INFO : files: op(f): [finished] processing unit "prepare-critools.service" Feb 12 19:11:29.944783 ignition[856]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Feb 12 19:11:29.944783 ignition[856]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 12 19:11:29.944783 ignition[856]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 12 19:11:29.944783 ignition[856]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Feb 12 19:11:29.944783 ignition[856]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Feb 12 19:11:29.944783 ignition[856]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 12 19:11:29.958190 ignition[856]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 12 19:11:29.959326 ignition[856]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Feb 12 19:11:29.959326 ignition[856]: INFO : files: op(15): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 19:11:29.959326 ignition[856]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 19:11:29.959326 ignition[856]: INFO : files: op(16): [started] setting preset to enabled for "prepare-critools.service" Feb 12 19:11:29.959326 ignition[856]: INFO : files: op(16): [finished] setting preset to enabled for "prepare-critools.service" Feb 12 19:11:29.959326 ignition[856]: INFO : files: createResultFile: createFiles: op(17): [started] writing file 
"/sysroot/etc/.ignition-result.json" Feb 12 19:11:29.959326 ignition[856]: INFO : files: createResultFile: createFiles: op(17): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 12 19:11:29.959326 ignition[856]: INFO : files: files passed Feb 12 19:11:29.959326 ignition[856]: INFO : Ignition finished successfully Feb 12 19:11:29.978611 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 12 19:11:29.978636 kernel: audit: type=1130 audit(1707765089.960:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:29.978647 kernel: audit: type=1130 audit(1707765089.971:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:29.978665 kernel: audit: type=1131 audit(1707765089.971:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:29.978675 kernel: audit: type=1130 audit(1707765089.974:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:29.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:29.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:11:29.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:29.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:29.959637 systemd[1]: Finished ignition-files.service. Feb 12 19:11:29.962191 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 12 19:11:29.965144 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 12 19:11:29.982569 initrd-setup-root-after-ignition[882]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 12 19:11:29.965924 systemd[1]: Starting ignition-quench.service... Feb 12 19:11:29.984485 initrd-setup-root-after-ignition[885]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 12 19:11:29.969845 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 12 19:11:29.969946 systemd[1]: Finished ignition-quench.service. Feb 12 19:11:29.971811 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 12 19:11:29.974645 systemd[1]: Reached target ignition-complete.target. Feb 12 19:11:29.980125 systemd[1]: Starting initrd-parse-etc.service... Feb 12 19:11:29.992815 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 12 19:11:29.992922 systemd[1]: Finished initrd-parse-etc.service. Feb 12 19:11:29.998146 kernel: audit: type=1130 audit(1707765089.993:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:11:29.998168 kernel: audit: type=1131 audit(1707765089.993:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:29.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:29.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:29.994394 systemd[1]: Reached target initrd-fs.target. Feb 12 19:11:29.998771 systemd[1]: Reached target initrd.target. Feb 12 19:11:29.999915 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 12 19:11:30.000699 systemd[1]: Starting dracut-pre-pivot.service... Feb 12 19:11:30.010971 systemd[1]: Finished dracut-pre-pivot.service. Feb 12 19:11:30.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.012453 systemd[1]: Starting initrd-cleanup.service... Feb 12 19:11:30.014924 kernel: audit: type=1130 audit(1707765090.011:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.020792 systemd[1]: Stopped target network.target. Feb 12 19:11:30.021511 systemd[1]: Stopped target nss-lookup.target. Feb 12 19:11:30.022571 systemd[1]: Stopped target remote-cryptsetup.target. Feb 12 19:11:30.023667 systemd[1]: Stopped target timers.target. Feb 12 19:11:30.024720 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Feb 12 19:11:30.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.024832 systemd[1]: Stopped dracut-pre-pivot.service. Feb 12 19:11:30.028634 kernel: audit: type=1131 audit(1707765090.025:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.028046 systemd[1]: Stopped target initrd.target. Feb 12 19:11:30.029184 systemd[1]: Stopped target basic.target. Feb 12 19:11:30.030141 systemd[1]: Stopped target ignition-complete.target. Feb 12 19:11:30.031158 systemd[1]: Stopped target ignition-diskful.target. Feb 12 19:11:30.032281 systemd[1]: Stopped target initrd-root-device.target. Feb 12 19:11:30.033415 systemd[1]: Stopped target remote-fs.target. Feb 12 19:11:30.034480 systemd[1]: Stopped target remote-fs-pre.target. Feb 12 19:11:30.035503 systemd[1]: Stopped target sysinit.target. Feb 12 19:11:30.036505 systemd[1]: Stopped target local-fs.target. Feb 12 19:11:30.037509 systemd[1]: Stopped target local-fs-pre.target. Feb 12 19:11:30.038522 systemd[1]: Stopped target swap.target. Feb 12 19:11:30.039479 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 12 19:11:30.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.039588 systemd[1]: Stopped dracut-pre-mount.service. Feb 12 19:11:30.044360 kernel: audit: type=1131 audit(1707765090.040:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.041497 systemd[1]: Stopped target cryptsetup.target. 
Feb 12 19:11:30.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.043948 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 19:11:30.048313 kernel: audit: type=1131 audit(1707765090.044:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.044094 systemd[1]: Stopped dracut-initqueue.service. Feb 12 19:11:30.045059 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 12 19:11:30.045197 systemd[1]: Stopped ignition-fetch-offline.service. Feb 12 19:11:30.047928 systemd[1]: Stopped target paths.target. Feb 12 19:11:30.048904 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 12 19:11:30.057370 systemd[1]: Stopped systemd-ask-password-console.path. Feb 12 19:11:30.058170 systemd[1]: Stopped target slices.target. Feb 12 19:11:30.059326 systemd[1]: Stopped target sockets.target. Feb 12 19:11:30.060279 systemd[1]: iscsid.socket: Deactivated successfully. Feb 12 19:11:30.060404 systemd[1]: Closed iscsid.socket. Feb 12 19:11:30.061506 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 12 19:11:30.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.061608 systemd[1]: Closed iscsiuio.socket. 
Feb 12 19:11:30.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.062535 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 12 19:11:30.062678 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 12 19:11:30.063602 systemd[1]: ignition-files.service: Deactivated successfully. Feb 12 19:11:30.063739 systemd[1]: Stopped ignition-files.service. Feb 12 19:11:30.065550 systemd[1]: Stopping ignition-mount.service... Feb 12 19:11:30.067089 systemd[1]: Stopping sysroot-boot.service... Feb 12 19:11:30.069449 systemd[1]: Stopping systemd-networkd.service... Feb 12 19:11:30.070566 systemd[1]: Stopping systemd-resolved.service... Feb 12 19:11:30.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.071334 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 12 19:11:30.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.075789 ignition[898]: INFO : Ignition 2.14.0 Feb 12 19:11:30.075789 ignition[898]: INFO : Stage: umount Feb 12 19:11:30.075789 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 12 19:11:30.075789 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:11:30.075789 ignition[898]: INFO : umount: umount passed Feb 12 19:11:30.075789 ignition[898]: INFO : Ignition finished successfully Feb 12 19:11:30.071524 systemd[1]: Stopped systemd-udev-trigger.service. Feb 12 19:11:30.072898 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Feb 12 19:11:30.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.073113 systemd[1]: Stopped dracut-pre-trigger.service. Feb 12 19:11:30.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.073975 systemd-networkd[738]: eth0: DHCPv6 lease lost Feb 12 19:11:30.080542 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 12 19:11:30.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.081308 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 12 19:11:30.086000 audit: BPF prog-id=6 op=UNLOAD Feb 12 19:11:30.087000 audit: BPF prog-id=9 op=UNLOAD Feb 12 19:11:30.081432 systemd[1]: Stopped systemd-resolved.service. Feb 12 19:11:30.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.083476 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 19:11:30.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.083567 systemd[1]: Stopped systemd-networkd.service. Feb 12 19:11:30.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:11:30.085797 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 12 19:11:30.085885 systemd[1]: Stopped ignition-mount.service. Feb 12 19:11:30.087101 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 12 19:11:30.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.087175 systemd[1]: Closed systemd-networkd.socket. Feb 12 19:11:30.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.087980 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 12 19:11:30.088022 systemd[1]: Stopped ignition-disks.service. Feb 12 19:11:30.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.089107 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 12 19:11:30.089146 systemd[1]: Stopped ignition-kargs.service. Feb 12 19:11:30.090104 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 12 19:11:30.090141 systemd[1]: Stopped ignition-setup.service. Feb 12 19:11:30.092704 systemd[1]: Stopping network-cleanup.service... Feb 12 19:11:30.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:11:30.094084 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 12 19:11:30.094146 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 12 19:11:30.095438 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:11:30.095481 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:11:30.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.097926 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 12 19:11:30.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.098006 systemd[1]: Stopped systemd-modules-load.service. Feb 12 19:11:30.100502 systemd[1]: Stopping systemd-udevd.service... Feb 12 19:11:30.104912 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 12 19:11:30.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.105437 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 12 19:11:30.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.105526 systemd[1]: Finished initrd-cleanup.service. Feb 12 19:11:30.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.109267 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Feb 12 19:11:30.109453 systemd[1]: Stopped systemd-udevd.service. Feb 12 19:11:30.111413 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 12 19:11:30.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.111490 systemd[1]: Stopped network-cleanup.service. Feb 12 19:11:30.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.112468 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 12 19:11:30.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.112505 systemd[1]: Closed systemd-udevd-control.socket. Feb 12 19:11:30.113678 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 12 19:11:30.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.113708 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 12 19:11:30.115577 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 12 19:11:30.115621 systemd[1]: Stopped dracut-pre-udev.service. Feb 12 19:11:30.117220 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 12 19:11:30.117255 systemd[1]: Stopped dracut-cmdline.service. 
Feb 12 19:11:30.118627 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 12 19:11:30.118664 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 12 19:11:30.121215 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 12 19:11:30.122483 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 12 19:11:30.122541 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 12 19:11:30.124860 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 19:11:30.124911 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 19:11:30.125833 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 12 19:11:30.125890 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 12 19:11:30.128311 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 12 19:11:30.128757 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 12 19:11:30.128839 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 12 19:11:30.143509 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 12 19:11:30.143602 systemd[1]: Stopped sysroot-boot.service. Feb 12 19:11:30.144418 systemd[1]: Reached target initrd-switch-root.target. Feb 12 19:11:30.145578 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 12 19:11:30.145623 systemd[1]: Stopped initrd-setup-root.service. Feb 12 19:11:30.147423 systemd[1]: Starting initrd-switch-root.service... Feb 12 19:11:30.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:30.153397 systemd[1]: Switching root. 
Feb 12 19:11:30.154000 audit: BPF prog-id=5 op=UNLOAD Feb 12 19:11:30.156000 audit: BPF prog-id=4 op=UNLOAD Feb 12 19:11:30.156000 audit: BPF prog-id=3 op=UNLOAD Feb 12 19:11:30.157000 audit: BPF prog-id=8 op=UNLOAD Feb 12 19:11:30.157000 audit: BPF prog-id=7 op=UNLOAD Feb 12 19:11:30.168810 iscsid[745]: iscsid shutting down. Feb 12 19:11:30.169364 systemd-journald[289]: Received SIGTERM from PID 1 (systemd). Feb 12 19:11:30.169396 systemd-journald[289]: Journal stopped Feb 12 19:11:32.302428 kernel: SELinux: Class mctp_socket not defined in policy. Feb 12 19:11:32.302479 kernel: SELinux: Class anon_inode not defined in policy. Feb 12 19:11:32.302495 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 12 19:11:32.302505 kernel: SELinux: policy capability network_peer_controls=1 Feb 12 19:11:32.302517 kernel: SELinux: policy capability open_perms=1 Feb 12 19:11:32.302526 kernel: SELinux: policy capability extended_socket_class=1 Feb 12 19:11:32.302536 kernel: SELinux: policy capability always_check_network=0 Feb 12 19:11:32.302545 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 12 19:11:32.302554 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 12 19:11:32.302563 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 12 19:11:32.302573 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 12 19:11:32.302587 systemd[1]: Successfully loaded SELinux policy in 35.305ms. Feb 12 19:11:32.302608 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.928ms. Feb 12 19:11:32.302622 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 19:11:32.302635 systemd[1]: Detected virtualization kvm. 
Feb 12 19:11:32.302648 systemd[1]: Detected architecture arm64. Feb 12 19:11:32.302661 systemd[1]: Detected first boot. Feb 12 19:11:32.302672 systemd[1]: Initializing machine ID from VM UUID. Feb 12 19:11:32.302682 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 12 19:11:32.302701 systemd[1]: Populated /etc with preset unit settings. Feb 12 19:11:32.302713 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:11:32.302725 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:11:32.302737 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:11:32.302748 systemd[1]: Queued start job for default target multi-user.target. Feb 12 19:11:32.302760 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 12 19:11:32.302772 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 12 19:11:32.302784 systemd[1]: Created slice system-addon\x2drun.slice. Feb 12 19:11:32.302800 systemd[1]: Created slice system-getty.slice. Feb 12 19:11:32.302811 systemd[1]: Created slice system-modprobe.slice. Feb 12 19:11:32.302825 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 12 19:11:32.302836 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 12 19:11:32.302847 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 12 19:11:32.302858 systemd[1]: Created slice user.slice. Feb 12 19:11:32.302876 systemd[1]: Started systemd-ask-password-console.path. Feb 12 19:11:32.302888 systemd[1]: Started systemd-ask-password-wall.path. Feb 12 19:11:32.302899 systemd[1]: Set up automount boot.automount. 
Feb 12 19:11:32.302913 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 12 19:11:32.302924 systemd[1]: Reached target integritysetup.target. Feb 12 19:11:32.302935 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:11:32.302945 systemd[1]: Reached target remote-fs.target. Feb 12 19:11:32.302956 systemd[1]: Reached target slices.target. Feb 12 19:11:32.302967 systemd[1]: Reached target swap.target. Feb 12 19:11:32.302977 systemd[1]: Reached target torcx.target. Feb 12 19:11:32.302988 systemd[1]: Reached target veritysetup.target. Feb 12 19:11:32.303000 systemd[1]: Listening on systemd-coredump.socket. Feb 12 19:11:32.303012 systemd[1]: Listening on systemd-initctl.socket. Feb 12 19:11:32.303022 systemd[1]: Listening on systemd-journald-audit.socket. Feb 12 19:11:32.303032 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 12 19:11:32.303080 systemd[1]: Listening on systemd-journald.socket. Feb 12 19:11:32.303092 systemd[1]: Listening on systemd-networkd.socket. Feb 12 19:11:32.303103 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 19:11:32.303113 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 19:11:32.303124 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 19:11:32.303135 systemd[1]: Mounting dev-hugepages.mount... Feb 12 19:11:32.303150 systemd[1]: Mounting dev-mqueue.mount... Feb 12 19:11:32.303163 systemd[1]: Mounting media.mount... Feb 12 19:11:32.303174 systemd[1]: Mounting sys-kernel-debug.mount... Feb 12 19:11:32.303186 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 19:11:32.303197 systemd[1]: Mounting tmp.mount... Feb 12 19:11:32.303208 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 19:11:32.303219 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 19:11:32.303230 systemd[1]: Starting kmod-static-nodes.service... Feb 12 19:11:32.303240 systemd[1]: Starting modprobe@configfs.service... 
Feb 12 19:11:32.303253 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 19:11:32.303264 systemd[1]: Starting modprobe@drm.service... Feb 12 19:11:32.303274 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 19:11:32.303284 systemd[1]: Starting modprobe@fuse.service... Feb 12 19:11:32.303294 systemd[1]: Starting modprobe@loop.service... Feb 12 19:11:32.303306 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 19:11:32.303326 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 12 19:11:32.303337 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 12 19:11:32.303552 systemd[1]: Starting systemd-journald.service... Feb 12 19:11:32.303570 kernel: fuse: init (API version 7.34) Feb 12 19:11:32.303580 systemd[1]: Starting systemd-modules-load.service... Feb 12 19:11:32.303590 systemd[1]: Starting systemd-network-generator.service... Feb 12 19:11:32.303600 systemd[1]: Starting systemd-remount-fs.service... Feb 12 19:11:32.303611 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 19:11:32.303630 systemd[1]: Mounted dev-hugepages.mount. Feb 12 19:11:32.303643 systemd[1]: Mounted dev-mqueue.mount. Feb 12 19:11:32.303654 systemd[1]: Mounted media.mount. Feb 12 19:11:32.303664 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 19:11:32.303676 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 19:11:32.303686 systemd[1]: Mounted tmp.mount. Feb 12 19:11:32.303697 kernel: loop: module loaded Feb 12 19:11:32.303708 systemd[1]: Finished kmod-static-nodes.service. Feb 12 19:11:32.303719 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 19:11:32.303730 systemd[1]: Finished modprobe@configfs.service. Feb 12 19:11:32.303744 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Feb 12 19:11:32.303754 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 19:11:32.303767 systemd-journald[1035]: Journal started Feb 12 19:11:32.303811 systemd-journald[1035]: Runtime Journal (/run/log/journal/c7c32158855845fbb4382fc036a2ce68) is 6.0M, max 48.7M, 42.6M free. Feb 12 19:11:32.226000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:11:32.226000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 12 19:11:32.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:32.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:32.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:11:32.301000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 19:11:32.301000 audit[1035]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffe7956460 a2=4000 a3=1 items=0 ppid=1 pid=1035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:11:32.301000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 19:11:32.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:32.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:32.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:32.305329 systemd[1]: Started systemd-journald.service. Feb 12 19:11:32.305999 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 19:11:32.306510 systemd[1]: Finished modprobe@drm.service. Feb 12 19:11:32.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:32.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:11:32.307433 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 19:11:32.307616 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 19:11:32.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:32.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:32.308543 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 19:11:32.308732 systemd[1]: Finished modprobe@fuse.service. Feb 12 19:11:32.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:32.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:32.309717 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 19:11:32.309913 systemd[1]: Finished modprobe@loop.service. Feb 12 19:11:32.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:32.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:11:32.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:32.312158 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:11:32.313454 systemd[1]: Finished systemd-network-generator.service. Feb 12 19:11:32.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:32.315065 systemd[1]: Finished systemd-remount-fs.service. Feb 12 19:11:32.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:32.316242 systemd[1]: Reached target network-pre.target. Feb 12 19:11:32.318021 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 19:11:32.319837 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 19:11:32.320431 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 19:11:32.321956 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 19:11:32.325953 systemd[1]: Starting systemd-journal-flush.service... Feb 12 19:11:32.326769 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 19:11:32.328040 systemd[1]: Starting systemd-random-seed.service... Feb 12 19:11:32.328854 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 19:11:32.330203 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:11:32.335674 systemd[1]: Finished flatcar-tmpfiles.service. 
Feb 12 19:11:32.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:32.336621 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 19:11:32.337556 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 19:11:32.339579 systemd[1]: Starting systemd-sysusers.service... Feb 12 19:11:32.340769 systemd-journald[1035]: Time spent on flushing to /var/log/journal/c7c32158855845fbb4382fc036a2ce68 is 11.490ms for 941 entries. Feb 12 19:11:32.340769 systemd-journald[1035]: System Journal (/var/log/journal/c7c32158855845fbb4382fc036a2ce68) is 8.0M, max 195.6M, 187.6M free. Feb 12 19:11:32.361556 systemd-journald[1035]: Received client request to flush runtime journal. Feb 12 19:11:32.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:32.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:32.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:11:32.345090 systemd[1]: Finished systemd-random-seed.service. Feb 12 19:11:32.361956 udevadm[1083]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 12 19:11:32.345921 systemd[1]: Reached target first-boot-complete.target. Feb 12 19:11:32.351540 systemd[1]: Finished systemd-udev-trigger.service. 
Feb 12 19:11:32.353664 systemd[1]: Starting systemd-udev-settle.service...
Feb 12 19:11:32.354716 systemd[1]: Finished systemd-sysctl.service.
Feb 12 19:11:32.364400 systemd[1]: Finished systemd-journal-flush.service.
Feb 12 19:11:32.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:32.367571 systemd[1]: Finished systemd-sysusers.service.
Feb 12 19:11:32.369727 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 19:11:32.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:32.389035 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 19:11:32.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:32.709517 systemd[1]: Finished systemd-hwdb-update.service.
Feb 12 19:11:32.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:32.711482 systemd[1]: Starting systemd-udevd.service...
Feb 12 19:11:32.730909 systemd-udevd[1092]: Using default interface naming scheme 'v252'.
Feb 12 19:11:32.742278 systemd[1]: Started systemd-udevd.service.
Feb 12 19:11:32.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:32.748079 systemd[1]: Starting systemd-networkd.service...
Feb 12 19:11:32.755395 systemd[1]: Starting systemd-userdbd.service...
Feb 12 19:11:32.761410 systemd[1]: Found device dev-ttyAMA0.device.
Feb 12 19:11:32.797513 systemd[1]: Started systemd-userdbd.service.
Feb 12 19:11:32.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:32.804891 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 19:11:32.856818 systemd[1]: Finished systemd-udev-settle.service.
Feb 12 19:11:32.857068 systemd-networkd[1109]: lo: Link UP
Feb 12 19:11:32.857077 systemd-networkd[1109]: lo: Gained carrier
Feb 12 19:11:32.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:32.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:32.857428 systemd-networkd[1109]: Enumeration completed
Feb 12 19:11:32.857521 systemd-networkd[1109]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 19:11:32.857686 systemd[1]: Started systemd-networkd.service.
Feb 12 19:11:32.858752 systemd-networkd[1109]: eth0: Link UP
Feb 12 19:11:32.858756 systemd-networkd[1109]: eth0: Gained carrier
Feb 12 19:11:32.860234 systemd[1]: Starting lvm2-activation-early.service...
Feb 12 19:11:32.869794 lvm[1126]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 12 19:11:32.879467 systemd-networkd[1109]: eth0: DHCPv4 address 10.0.0.30/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 12 19:11:32.896210 systemd[1]: Finished lvm2-activation-early.service.
Feb 12 19:11:32.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:32.897087 systemd[1]: Reached target cryptsetup.target.
Feb 12 19:11:32.898929 systemd[1]: Starting lvm2-activation.service...
Feb 12 19:11:32.902445 lvm[1128]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 12 19:11:32.927376 systemd[1]: Finished lvm2-activation.service.
Feb 12 19:11:32.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:32.928113 systemd[1]: Reached target local-fs-pre.target.
Feb 12 19:11:32.928768 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 12 19:11:32.928795 systemd[1]: Reached target local-fs.target.
Feb 12 19:11:32.929361 systemd[1]: Reached target machines.target.
Feb 12 19:11:32.931270 systemd[1]: Starting ldconfig.service...
Feb 12 19:11:32.932132 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 12 19:11:32.932179 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 19:11:32.933198 systemd[1]: Starting systemd-boot-update.service...
Feb 12 19:11:32.934971 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 12 19:11:32.937122 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 12 19:11:32.938018 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 12 19:11:32.938084 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 12 19:11:32.939174 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 12 19:11:32.940446 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1131 (bootctl)
Feb 12 19:11:32.941895 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 12 19:11:32.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:32.945892 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 12 19:11:32.953561 systemd-tmpfiles[1134]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 12 19:11:32.954818 systemd-tmpfiles[1134]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 12 19:11:32.957276 systemd-tmpfiles[1134]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 12 19:11:33.017787 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 12 19:11:33.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:33.024924 systemd-fsck[1140]: fsck.fat 4.2 (2021-01-31)
Feb 12 19:11:33.024924 systemd-fsck[1140]: /dev/vda1: 236 files, 113719/258078 clusters
Feb 12 19:11:33.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:33.026843 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 12 19:11:33.112457 ldconfig[1130]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 12 19:11:33.117202 systemd[1]: Finished ldconfig.service.
Feb 12 19:11:33.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:33.288603 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 12 19:11:33.290137 systemd[1]: Mounting boot.mount...
Feb 12 19:11:33.296778 systemd[1]: Mounted boot.mount.
Feb 12 19:11:33.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:33.303846 systemd[1]: Finished systemd-boot-update.service.
Feb 12 19:11:33.357199 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 12 19:11:33.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:33.359290 systemd[1]: Starting audit-rules.service...
Feb 12 19:11:33.361030 systemd[1]: Starting clean-ca-certificates.service...
Feb 12 19:11:33.362736 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 12 19:11:33.365008 systemd[1]: Starting systemd-resolved.service...
Feb 12 19:11:33.367214 systemd[1]: Starting systemd-timesyncd.service...
Feb 12 19:11:33.369437 systemd[1]: Starting systemd-update-utmp.service...
Feb 12 19:11:33.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:33.371731 systemd[1]: Finished clean-ca-certificates.service.
Feb 12 19:11:33.372901 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 12 19:11:33.378291 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 12 19:11:33.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:33.380393 systemd[1]: Starting systemd-update-done.service...
Feb 12 19:11:33.382000 audit[1162]: SYSTEM_BOOT pid=1162 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:33.384515 systemd[1]: Finished systemd-update-utmp.service.
Feb 12 19:11:33.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:33.390418 systemd[1]: Finished systemd-update-done.service.
Feb 12 19:11:33.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:11:33.429000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 12 19:11:33.429000 audit[1175]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcc1130a0 a2=420 a3=0 items=0 ppid=1150 pid=1175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:11:33.429000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 12 19:11:33.430173 augenrules[1175]: No rules
Feb 12 19:11:33.430495 systemd[1]: Started systemd-timesyncd.service.
Feb 12 19:11:33.431464 systemd-timesyncd[1158]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 12 19:11:33.431747 systemd-timesyncd[1158]: Initial clock synchronization to Mon 2024-02-12 19:11:33.757813 UTC.
Feb 12 19:11:33.431859 systemd[1]: Finished audit-rules.service.
Feb 12 19:11:33.432608 systemd[1]: Reached target time-set.target.
Feb 12 19:11:33.433393 systemd-resolved[1155]: Positive Trust Anchors:
Feb 12 19:11:33.433405 systemd-resolved[1155]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 19:11:33.433432 systemd-resolved[1155]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 19:11:33.455657 systemd-resolved[1155]: Defaulting to hostname 'linux'.
Feb 12 19:11:33.457008 systemd[1]: Started systemd-resolved.service.
Feb 12 19:11:33.457753 systemd[1]: Reached target network.target.
Feb 12 19:11:33.458390 systemd[1]: Reached target nss-lookup.target.
Feb 12 19:11:33.459030 systemd[1]: Reached target sysinit.target.
Feb 12 19:11:33.459724 systemd[1]: Started motdgen.path.
Feb 12 19:11:33.460327 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 12 19:11:33.461340 systemd[1]: Started logrotate.timer.
Feb 12 19:11:33.462023 systemd[1]: Started mdadm.timer.
Feb 12 19:11:33.462616 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 12 19:11:33.463300 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 12 19:11:33.463351 systemd[1]: Reached target paths.target.
Feb 12 19:11:33.463940 systemd[1]: Reached target timers.target.
Feb 12 19:11:33.465034 systemd[1]: Listening on dbus.socket.
Feb 12 19:11:33.466707 systemd[1]: Starting docker.socket...
Feb 12 19:11:33.468439 systemd[1]: Listening on sshd.socket.
Feb 12 19:11:33.469055 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 19:11:33.469398 systemd[1]: Listening on docker.socket.
Feb 12 19:11:33.469959 systemd[1]: Reached target sockets.target.
Feb 12 19:11:33.470503 systemd[1]: Reached target basic.target.
Feb 12 19:11:33.471151 systemd[1]: System is tainted: cgroupsv1
Feb 12 19:11:33.471193 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 12 19:11:33.471212 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 12 19:11:33.472211 systemd[1]: Starting containerd.service...
Feb 12 19:11:33.473993 systemd[1]: Starting dbus.service...
Feb 12 19:11:33.475628 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 12 19:11:33.477640 systemd[1]: Starting extend-filesystems.service...
Feb 12 19:11:33.478470 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 12 19:11:33.479903 systemd[1]: Starting motdgen.service...
Feb 12 19:11:33.481718 systemd[1]: Starting prepare-cni-plugins.service...
Feb 12 19:11:33.484059 systemd[1]: Starting prepare-critools.service...
Feb 12 19:11:33.486305 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 12 19:11:33.488189 systemd[1]: Starting sshd-keygen.service...
Feb 12 19:11:33.489949 jq[1187]: false
Feb 12 19:11:33.490997 systemd[1]: Starting systemd-logind.service...
Feb 12 19:11:33.491770 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 19:11:33.491837 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 12 19:11:33.493612 systemd[1]: Starting update-engine.service...
Feb 12 19:11:33.497156 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 12 19:11:33.502181 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 12 19:11:33.507321 jq[1203]: true
Feb 12 19:11:33.502483 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 12 19:11:33.517957 tar[1209]: ./
Feb 12 19:11:33.517957 tar[1209]: ./macvlan
Feb 12 19:11:33.510400 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 12 19:11:33.518298 jq[1211]: true
Feb 12 19:11:33.510629 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 12 19:11:33.518887 tar[1210]: crictl
Feb 12 19:11:33.526683 dbus-daemon[1186]: [system] SELinux support is enabled
Feb 12 19:11:33.526860 systemd[1]: Started dbus.service.
Feb 12 19:11:33.529232 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 12 19:11:33.529297 systemd[1]: Reached target system-config.target.
Feb 12 19:11:33.530142 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 12 19:11:33.530166 systemd[1]: Reached target user-config.target.
Feb 12 19:11:33.534498 systemd[1]: motdgen.service: Deactivated successfully.
Feb 12 19:11:33.553516 systemd[1]: Finished motdgen.service.
Feb 12 19:11:33.559166 extend-filesystems[1188]: Found vda
Feb 12 19:11:33.559166 extend-filesystems[1188]: Found vda1
Feb 12 19:11:33.559166 extend-filesystems[1188]: Found vda2
Feb 12 19:11:33.559166 extend-filesystems[1188]: Found vda3
Feb 12 19:11:33.559166 extend-filesystems[1188]: Found usr
Feb 12 19:11:33.559166 extend-filesystems[1188]: Found vda4
Feb 12 19:11:33.559166 extend-filesystems[1188]: Found vda6
Feb 12 19:11:33.559166 extend-filesystems[1188]: Found vda7
Feb 12 19:11:33.559166 extend-filesystems[1188]: Found vda9
Feb 12 19:11:33.559166 extend-filesystems[1188]: Checking size of /dev/vda9
Feb 12 19:11:33.578482 extend-filesystems[1188]: Resized partition /dev/vda9
Feb 12 19:11:33.581589 extend-filesystems[1249]: resize2fs 1.46.5 (30-Dec-2021)
Feb 12 19:11:33.584483 bash[1239]: Updated "/home/core/.ssh/authorized_keys"
Feb 12 19:11:33.584098 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 12 19:11:33.592630 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 12 19:11:33.592722 tar[1209]: ./static
Feb 12 19:11:33.627647 tar[1209]: ./vlan
Feb 12 19:11:33.628222 systemd-logind[1198]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 12 19:11:33.634350 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 12 19:11:33.635388 systemd-logind[1198]: New seat seat0.
Feb 12 19:11:33.643392 systemd[1]: Started systemd-logind.service.
Feb 12 19:11:33.645920 update_engine[1201]: I0212 19:11:33.639189 1201 main.cc:92] Flatcar Update Engine starting
Feb 12 19:11:33.646351 extend-filesystems[1249]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 12 19:11:33.646351 extend-filesystems[1249]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 12 19:11:33.646351 extend-filesystems[1249]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 12 19:11:33.650444 extend-filesystems[1188]: Resized filesystem in /dev/vda9
Feb 12 19:11:33.651221 update_engine[1201]: I0212 19:11:33.647636 1201 update_check_scheduler.cc:74] Next update check in 10m16s
Feb 12 19:11:33.646987 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 12 19:11:33.647233 systemd[1]: Finished extend-filesystems.service.
Feb 12 19:11:33.648444 systemd[1]: Started update-engine.service.
Feb 12 19:11:33.651521 systemd[1]: Started locksmithd.service.
Feb 12 19:11:33.677239 env[1213]: time="2024-02-12T19:11:33.677184280Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 12 19:11:33.678679 tar[1209]: ./portmap
Feb 12 19:11:33.698484 env[1213]: time="2024-02-12T19:11:33.698435640Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 12 19:11:33.698625 env[1213]: time="2024-02-12T19:11:33.698602360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 12 19:11:33.704725 env[1213]: time="2024-02-12T19:11:33.703509880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 12 19:11:33.704725 env[1213]: time="2024-02-12T19:11:33.703551760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 12 19:11:33.704725 env[1213]: time="2024-02-12T19:11:33.703809840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 12 19:11:33.704725 env[1213]: time="2024-02-12T19:11:33.703828720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 12 19:11:33.704725 env[1213]: time="2024-02-12T19:11:33.703842280Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 12 19:11:33.704725 env[1213]: time="2024-02-12T19:11:33.703852520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 12 19:11:33.704725 env[1213]: time="2024-02-12T19:11:33.703939880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 12 19:11:33.704725 env[1213]: time="2024-02-12T19:11:33.704441640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 12 19:11:33.704725 env[1213]: time="2024-02-12T19:11:33.704591840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 12 19:11:33.704725 env[1213]: time="2024-02-12T19:11:33.704608360Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 12 19:11:33.705024 env[1213]: time="2024-02-12T19:11:33.704679080Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 12 19:11:33.705024 env[1213]: time="2024-02-12T19:11:33.704694080Z" level=info msg="metadata content store policy set" policy=shared
Feb 12 19:11:33.708911 tar[1209]: ./host-local
Feb 12 19:11:33.709425 env[1213]: time="2024-02-12T19:11:33.709341120Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 12 19:11:33.709425 env[1213]: time="2024-02-12T19:11:33.709380720Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 12 19:11:33.709425 env[1213]: time="2024-02-12T19:11:33.709394240Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 12 19:11:33.709581 env[1213]: time="2024-02-12T19:11:33.709562320Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 12 19:11:33.709702 env[1213]: time="2024-02-12T19:11:33.709688240Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 12 19:11:33.709766 env[1213]: time="2024-02-12T19:11:33.709752160Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 12 19:11:33.709837 env[1213]: time="2024-02-12T19:11:33.709822760Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 12 19:11:33.710255 env[1213]: time="2024-02-12T19:11:33.710227840Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 12 19:11:33.710366 env[1213]: time="2024-02-12T19:11:33.710347560Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 12 19:11:33.710435 env[1213]: time="2024-02-12T19:11:33.710420000Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 12 19:11:33.710495 env[1213]: time="2024-02-12T19:11:33.710480600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 12 19:11:33.710554 env[1213]: time="2024-02-12T19:11:33.710540320Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 12 19:11:33.710737 env[1213]: time="2024-02-12T19:11:33.710718680Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 12 19:11:33.710931 env[1213]: time="2024-02-12T19:11:33.710909960Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 12 19:11:33.711331 env[1213]: time="2024-02-12T19:11:33.711294000Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 12 19:11:33.711421 env[1213]: time="2024-02-12T19:11:33.711403840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 12 19:11:33.711481 env[1213]: time="2024-02-12T19:11:33.711466760Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 12 19:11:33.711792 env[1213]: time="2024-02-12T19:11:33.711776440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 12 19:11:33.711856 env[1213]: time="2024-02-12T19:11:33.711841680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 12 19:11:33.711949 env[1213]: time="2024-02-12T19:11:33.711933200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 12 19:11:33.712106 env[1213]: time="2024-02-12T19:11:33.712087680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 12 19:11:33.712178 env[1213]: time="2024-02-12T19:11:33.712162920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 12 19:11:33.712239 env[1213]: time="2024-02-12T19:11:33.712224600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 12 19:11:33.712301 env[1213]: time="2024-02-12T19:11:33.712285720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 12 19:11:33.712396 env[1213]: time="2024-02-12T19:11:33.712382080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 12 19:11:33.712477 env[1213]: time="2024-02-12T19:11:33.712462320Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 12 19:11:33.712670 env[1213]: time="2024-02-12T19:11:33.712650400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 12 19:11:33.712765 env[1213]: time="2024-02-12T19:11:33.712750880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 12 19:11:33.712828 env[1213]: time="2024-02-12T19:11:33.712813280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 12 19:11:33.712909 env[1213]: time="2024-02-12T19:11:33.712893600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 12 19:11:33.712974 env[1213]: time="2024-02-12T19:11:33.712958040Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 12 19:11:33.713031 env[1213]: time="2024-02-12T19:11:33.713016800Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 12 19:11:33.713096 env[1213]: time="2024-02-12T19:11:33.713081080Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 12 19:11:33.713176 env[1213]: time="2024-02-12T19:11:33.713160320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 12 19:11:33.713502 env[1213]: time="2024-02-12T19:11:33.713444760Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 12 19:11:33.720557 env[1213]: time="2024-02-12T19:11:33.714136760Z" level=info msg="Connect containerd service"
Feb 12 19:11:33.720557 env[1213]: time="2024-02-12T19:11:33.714188200Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 12 19:11:33.721178 env[1213]: time="2024-02-12T19:11:33.721144160Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 19:11:33.726027 env[1213]: time="2024-02-12T19:11:33.725995720Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 12 19:11:33.726329 env[1213]: time="2024-02-12T19:11:33.726236320Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 12 19:11:33.726549 env[1213]: time="2024-02-12T19:11:33.726461720Z" level=info msg="containerd successfully booted in 0.049948s"
Feb 12 19:11:33.726596 systemd[1]: Started containerd.service.
Feb 12 19:11:33.727658 env[1213]: time="2024-02-12T19:11:33.727609960Z" level=info msg="Start subscribing containerd event"
Feb 12 19:11:33.727717 env[1213]: time="2024-02-12T19:11:33.727680800Z" level=info msg="Start recovering state"
Feb 12 19:11:33.727809 env[1213]: time="2024-02-12T19:11:33.727759240Z" level=info msg="Start event monitor"
Feb 12 19:11:33.727809 env[1213]: time="2024-02-12T19:11:33.727785440Z" level=info msg="Start snapshots syncer"
Feb 12 19:11:33.727809 env[1213]: time="2024-02-12T19:11:33.727800200Z" level=info msg="Start cni network conf syncer for default"
Feb 12 19:11:33.727902 env[1213]: time="2024-02-12T19:11:33.727808280Z" level=info msg="Start streaming server"
Feb 12 19:11:33.739632 tar[1209]: ./vrf
Feb 12 19:11:33.770184 tar[1209]: ./bridge
Feb 12 19:11:33.800345 tar[1209]: ./tuning
Feb 12 19:11:33.801117 locksmithd[1254]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 12 19:11:33.827771 tar[1209]: ./firewall
Feb 12 19:11:33.871977 tar[1209]: ./host-device
Feb 12 19:11:33.903383 tar[1209]: ./sbr
Feb 12 19:11:33.931710 tar[1209]: ./loopback
Feb 12 19:11:33.959578 tar[1209]: ./dhcp
Feb 12 19:11:33.986607 systemd[1]: Finished prepare-critools.service.
Feb 12 19:11:34.040343 tar[1209]: ./ptp
Feb 12 19:11:34.074133 tar[1209]: ./ipvlan
Feb 12 19:11:34.104153 tar[1209]: ./bandwidth
Feb 12 19:11:34.151633 systemd[1]: Finished prepare-cni-plugins.service.
Feb 12 19:11:34.708607 systemd-networkd[1109]: eth0: Gained IPv6LL
Feb 12 19:11:35.544651 sshd_keygen[1220]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 12 19:11:35.562511 systemd[1]: Finished sshd-keygen.service.
Feb 12 19:11:35.564797 systemd[1]: Starting issuegen.service...
Feb 12 19:11:35.569750 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 19:11:35.569961 systemd[1]: Finished issuegen.service. Feb 12 19:11:35.572201 systemd[1]: Starting systemd-user-sessions.service... Feb 12 19:11:35.577837 systemd[1]: Finished systemd-user-sessions.service. Feb 12 19:11:35.580007 systemd[1]: Started getty@tty1.service. Feb 12 19:11:35.581936 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 12 19:11:35.583071 systemd[1]: Reached target getty.target. Feb 12 19:11:35.583801 systemd[1]: Reached target multi-user.target. Feb 12 19:11:35.585685 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 19:11:35.592597 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 19:11:35.592831 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 19:11:35.593827 systemd[1]: Startup finished in 6.250s (kernel) + 5.355s (userspace) = 11.606s. Feb 12 19:11:37.524789 systemd[1]: Created slice system-sshd.slice. Feb 12 19:11:37.525913 systemd[1]: Started sshd@0-10.0.0.30:22-10.0.0.1:52280.service. Feb 12 19:11:37.569999 sshd[1291]: Accepted publickey for core from 10.0.0.1 port 52280 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:11:37.572675 sshd[1291]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:11:37.581646 systemd-logind[1198]: New session 1 of user core. Feb 12 19:11:37.582511 systemd[1]: Created slice user-500.slice. Feb 12 19:11:37.583480 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 19:11:37.591506 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 19:11:37.592689 systemd[1]: Starting user@500.service... Feb 12 19:11:37.595305 (systemd)[1296]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:11:37.655211 systemd[1296]: Queued start job for default target default.target. Feb 12 19:11:37.655463 systemd[1296]: Reached target paths.target. 
Feb 12 19:11:37.655478 systemd[1296]: Reached target sockets.target. Feb 12 19:11:37.655489 systemd[1296]: Reached target timers.target. Feb 12 19:11:37.655512 systemd[1296]: Reached target basic.target. Feb 12 19:11:37.655637 systemd[1]: Started user@500.service. Feb 12 19:11:37.656304 systemd[1296]: Reached target default.target. Feb 12 19:11:37.656384 systemd[1296]: Startup finished in 55ms. Feb 12 19:11:37.656534 systemd[1]: Started session-1.scope. Feb 12 19:11:37.707385 systemd[1]: Started sshd@1-10.0.0.30:22-10.0.0.1:52290.service. Feb 12 19:11:37.750262 sshd[1305]: Accepted publickey for core from 10.0.0.1 port 52290 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:11:37.751581 sshd[1305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:11:37.755368 systemd-logind[1198]: New session 2 of user core. Feb 12 19:11:37.756145 systemd[1]: Started session-2.scope. Feb 12 19:11:37.812174 sshd[1305]: pam_unix(sshd:session): session closed for user core Feb 12 19:11:37.814610 systemd[1]: Started sshd@2-10.0.0.30:22-10.0.0.1:52302.service. Feb 12 19:11:37.815090 systemd[1]: sshd@1-10.0.0.30:22-10.0.0.1:52290.service: Deactivated successfully. Feb 12 19:11:37.815953 systemd-logind[1198]: Session 2 logged out. Waiting for processes to exit. Feb 12 19:11:37.816036 systemd[1]: session-2.scope: Deactivated successfully. Feb 12 19:11:37.817061 systemd-logind[1198]: Removed session 2. Feb 12 19:11:37.854118 sshd[1310]: Accepted publickey for core from 10.0.0.1 port 52302 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:11:37.855593 sshd[1310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:11:37.858882 systemd-logind[1198]: New session 3 of user core. Feb 12 19:11:37.859668 systemd[1]: Started session-3.scope. 
Feb 12 19:11:37.910667 sshd[1310]: pam_unix(sshd:session): session closed for user core Feb 12 19:11:37.912969 systemd[1]: Started sshd@3-10.0.0.30:22-10.0.0.1:52306.service. Feb 12 19:11:37.913504 systemd[1]: sshd@2-10.0.0.30:22-10.0.0.1:52302.service: Deactivated successfully. Feb 12 19:11:37.914400 systemd-logind[1198]: Session 3 logged out. Waiting for processes to exit. Feb 12 19:11:37.914488 systemd[1]: session-3.scope: Deactivated successfully. Feb 12 19:11:37.915533 systemd-logind[1198]: Removed session 3. Feb 12 19:11:37.952355 sshd[1317]: Accepted publickey for core from 10.0.0.1 port 52306 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:11:37.953647 sshd[1317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:11:37.956892 systemd-logind[1198]: New session 4 of user core. Feb 12 19:11:37.957722 systemd[1]: Started session-4.scope. Feb 12 19:11:38.012753 sshd[1317]: pam_unix(sshd:session): session closed for user core Feb 12 19:11:38.014917 systemd[1]: Started sshd@4-10.0.0.30:22-10.0.0.1:52316.service. Feb 12 19:11:38.015364 systemd[1]: sshd@3-10.0.0.30:22-10.0.0.1:52306.service: Deactivated successfully. Feb 12 19:11:38.016393 systemd-logind[1198]: Session 4 logged out. Waiting for processes to exit. Feb 12 19:11:38.016424 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 19:11:38.017114 systemd-logind[1198]: Removed session 4. Feb 12 19:11:38.055712 sshd[1324]: Accepted publickey for core from 10.0.0.1 port 52316 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU Feb 12 19:11:38.057201 sshd[1324]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:11:38.060552 systemd-logind[1198]: New session 5 of user core. Feb 12 19:11:38.061362 systemd[1]: Started session-5.scope. 
Feb 12 19:11:38.120058 sudo[1330]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 19:11:38.120684 sudo[1330]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 19:11:38.734938 systemd[1]: Reloading. Feb 12 19:11:38.770665 /usr/lib/systemd/system-generators/torcx-generator[1360]: time="2024-02-12T19:11:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:11:38.770697 /usr/lib/systemd/system-generators/torcx-generator[1360]: time="2024-02-12T19:11:38Z" level=info msg="torcx already run" Feb 12 19:11:38.843120 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:11:38.843141 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:11:38.861211 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:11:38.917674 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 19:11:38.924100 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 19:11:38.924636 systemd[1]: Reached target network-online.target. Feb 12 19:11:38.926450 systemd[1]: Started kubelet.service. Feb 12 19:11:38.938177 systemd[1]: Starting coreos-metadata.service... Feb 12 19:11:38.946640 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 12 19:11:38.946908 systemd[1]: Finished coreos-metadata.service. 
Feb 12 19:11:39.213242 kubelet[1405]: E0212 19:11:39.213151 1405 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 19:11:39.215246 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:11:39.215420 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:11:39.313174 systemd[1]: Stopped kubelet.service. Feb 12 19:11:39.329001 systemd[1]: Reloading. Feb 12 19:11:39.372864 /usr/lib/systemd/system-generators/torcx-generator[1476]: time="2024-02-12T19:11:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:11:39.376656 /usr/lib/systemd/system-generators/torcx-generator[1476]: time="2024-02-12T19:11:39Z" level=info msg="torcx already run" Feb 12 19:11:39.432654 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:11:39.432673 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:11:39.450399 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:11:39.511493 systemd[1]: Started kubelet.service. Feb 12 19:11:39.554921 kubelet[1520]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Feb 12 19:11:39.554921 kubelet[1520]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:11:39.555288 kubelet[1520]: I0212 19:11:39.555065 1520 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:11:39.557406 kubelet[1520]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:11:39.557406 kubelet[1520]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:11:40.348602 kubelet[1520]: I0212 19:11:40.348567 1520 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 19:11:40.348602 kubelet[1520]: I0212 19:11:40.348597 1520 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:11:40.348856 kubelet[1520]: I0212 19:11:40.348834 1520 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 19:11:40.356771 kubelet[1520]: I0212 19:11:40.356736 1520 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:11:40.360741 kubelet[1520]: W0212 19:11:40.360711 1520 machine.go:65] Cannot read vendor id correctly, set empty. Feb 12 19:11:40.361930 kubelet[1520]: I0212 19:11:40.361910 1520 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 19:11:40.362739 kubelet[1520]: I0212 19:11:40.362719 1520 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:11:40.362800 kubelet[1520]: I0212 19:11:40.362789 1520 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 19:11:40.362898 kubelet[1520]: I0212 19:11:40.362807 1520 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 19:11:40.362898 kubelet[1520]: I0212 19:11:40.362821 1520 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 19:11:40.363161 kubelet[1520]: I0212 19:11:40.363133 1520 
state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:11:40.371919 kubelet[1520]: I0212 19:11:40.371859 1520 kubelet.go:398] "Attempting to sync node with API server" Feb 12 19:11:40.371919 kubelet[1520]: I0212 19:11:40.371889 1520 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:11:40.372073 kubelet[1520]: I0212 19:11:40.372047 1520 kubelet.go:297] "Adding apiserver pod source" Feb 12 19:11:40.372073 kubelet[1520]: I0212 19:11:40.372060 1520 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:11:40.372955 kubelet[1520]: E0212 19:11:40.372817 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:11:40.372955 kubelet[1520]: E0212 19:11:40.372860 1520 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:11:40.373478 kubelet[1520]: I0212 19:11:40.373447 1520 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:11:40.375681 kubelet[1520]: W0212 19:11:40.375650 1520 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 12 19:11:40.376556 kubelet[1520]: I0212 19:11:40.376534 1520 server.go:1186] "Started kubelet" Feb 12 19:11:40.377478 kubelet[1520]: I0212 19:11:40.377461 1520 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:11:40.377913 kubelet[1520]: E0212 19:11:40.377664 1520 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:11:40.377913 kubelet[1520]: E0212 19:11:40.377712 1520 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:11:40.378802 kubelet[1520]: I0212 19:11:40.378768 1520 server.go:451] "Adding debug handlers to kubelet server" Feb 12 19:11:40.380214 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 12 19:11:40.380953 kubelet[1520]: I0212 19:11:40.380818 1520 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:11:40.382472 kubelet[1520]: I0212 19:11:40.382450 1520 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 19:11:40.382637 kubelet[1520]: I0212 19:11:40.382622 1520 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 19:11:40.384243 kubelet[1520]: E0212 19:11:40.383871 1520 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.0.0.30" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:11:40.384243 kubelet[1520]: W0212 19:11:40.384062 1520 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:11:40.384243 kubelet[1520]: E0212 19:11:40.384100 1520 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:11:40.384243 kubelet[1520]: W0212 19:11:40.384181 1520 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.30" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:11:40.384243 kubelet[1520]: E0212 19:11:40.384192 1520 reflector.go:140] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.30" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:11:40.385766 kubelet[1520]: E0212 19:11:40.382114 1520 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.30.17b3334f87c0393f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.30", UID:"10.0.0.30", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.30"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 376504639, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 376504639, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:11:40.385766 kubelet[1520]: W0212 19:11:40.385649 1520 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:11:40.385766 kubelet[1520]: E0212 19:11:40.385668 1520 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:11:40.386524 kubelet[1520]: E0212 19:11:40.386427 1520 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.30.17b3334f87d26b8c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.30", UID:"10.0.0.30", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.30"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 377697164, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 377697164, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User 
"system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:11:40.416383 kubelet[1520]: I0212 19:11:40.416358 1520 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:11:40.416535 kubelet[1520]: I0212 19:11:40.416523 1520 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:11:40.416596 kubelet[1520]: I0212 19:11:40.416586 1520 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:11:40.417218 kubelet[1520]: E0212 19:11:40.417129 1520 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.30.17b3334f8a148b1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.30", UID:"10.0.0.30", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.30 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.30"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 415585050, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 415585050, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:11:40.418072 kubelet[1520]: E0212 19:11:40.418008 1520 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.30.17b3334f8a149eac", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.30", UID:"10.0.0.30", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.30 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.30"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 415590060, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 415590060, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:11:40.418974 kubelet[1520]: E0212 19:11:40.418905 1520 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.30.17b3334f8a14ada1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.30", UID:"10.0.0.30", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.30 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.30"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 415593889, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 415593889, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:11:40.419805 kubelet[1520]: I0212 19:11:40.419784 1520 policy_none.go:49] "None policy: Start" Feb 12 19:11:40.420523 kubelet[1520]: I0212 19:11:40.420505 1520 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:11:40.420580 kubelet[1520]: I0212 19:11:40.420532 1520 state_mem.go:35] "Initializing new in-memory state store" Feb 12 19:11:40.426763 kubelet[1520]: I0212 19:11:40.426722 1520 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:11:40.426946 kubelet[1520]: I0212 19:11:40.426922 1520 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:11:40.428299 kubelet[1520]: E0212 19:11:40.428269 1520 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.30\" not found" Feb 12 19:11:40.428633 kubelet[1520]: E0212 19:11:40.428554 1520 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.30.17b3334f8aceefee", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.30", UID:"10.0.0.30", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.30"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 427800558, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 427800558, time.Local), Count:1, Type:"Normal", 
EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:11:40.483284 kubelet[1520]: I0212 19:11:40.483255 1520 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.30" Feb 12 19:11:40.484897 kubelet[1520]: E0212 19:11:40.484864 1520 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.30" Feb 12 19:11:40.485012 kubelet[1520]: E0212 19:11:40.484838 1520 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.30.17b3334f8a148b1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.30", UID:"10.0.0.30", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.30 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.30"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 415585050, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 483203369, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.30.17b3334f8a148b1a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:11:40.489112 kubelet[1520]: E0212 19:11:40.489019 1520 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.30.17b3334f8a149eac", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.30", UID:"10.0.0.30", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.30 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.30"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 415590060, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 483217503, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.30.17b3334f8a149eac" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:11:40.491580 kubelet[1520]: E0212 19:11:40.491504 1520 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.30.17b3334f8a14ada1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.30", UID:"10.0.0.30", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.30 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.30"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 415593889, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 483222432, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.30.17b3334f8a14ada1" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:11:40.524686 kubelet[1520]: I0212 19:11:40.524654 1520 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 12 19:11:40.544175 kubelet[1520]: I0212 19:11:40.544149 1520 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 12 19:11:40.544175 kubelet[1520]: I0212 19:11:40.544173 1520 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 19:11:40.544335 kubelet[1520]: I0212 19:11:40.544189 1520 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 19:11:40.544335 kubelet[1520]: E0212 19:11:40.544241 1520 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 12 19:11:40.545623 kubelet[1520]: W0212 19:11:40.545600 1520 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:11:40.545623 kubelet[1520]: E0212 19:11:40.545626 1520 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:11:40.585122 kubelet[1520]: E0212 19:11:40.585067 1520 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.0.0.30" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:11:40.685804 kubelet[1520]: I0212 19:11:40.685780 1520 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.30" Feb 12 19:11:40.687131 kubelet[1520]: E0212 19:11:40.687100 1520 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.30" Feb 12 19:11:40.687131 kubelet[1520]: E0212 19:11:40.687057 1520 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, 
ObjectMeta:v1.ObjectMeta{Name:"10.0.0.30.17b3334f8a148b1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.30", UID:"10.0.0.30", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.30 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.30"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 415585050, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 685735033, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.30.17b3334f8a148b1a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:11:40.687999 kubelet[1520]: E0212 19:11:40.687937 1520 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.30.17b3334f8a149eac", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.30", UID:"10.0.0.30", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.30 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.30"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 415590060, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 685749656, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.30.17b3334f8a149eac" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:11:40.779396 kubelet[1520]: E0212 19:11:40.779244 1520 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.30.17b3334f8a14ada1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.30", UID:"10.0.0.30", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.30 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.30"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 415593889, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 685752548, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.30.17b3334f8a14ada1" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:11:40.987116 kubelet[1520]: E0212 19:11:40.987009 1520 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.0.0.30" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:11:41.088687 kubelet[1520]: I0212 19:11:41.088641 1520 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.30" Feb 12 19:11:41.090374 kubelet[1520]: E0212 19:11:41.090343 1520 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.30" Feb 12 19:11:41.090516 kubelet[1520]: E0212 19:11:41.090325 1520 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.30.17b3334f8a148b1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.30", UID:"10.0.0.30", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.30 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.30"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 415585050, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 11, 41, 88592882, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.30.17b3334f8a148b1a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:11:41.179312 kubelet[1520]: E0212 19:11:41.179216 1520 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.30.17b3334f8a149eac", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.30", UID:"10.0.0.30", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.30 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.30"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 415590060, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 11, 41, 88605034, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.30.17b3334f8a149eac" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:11:41.372987 kubelet[1520]: E0212 19:11:41.372891 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:11:41.379528 kubelet[1520]: E0212 19:11:41.379440 1520 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.30.17b3334f8a14ada1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.30", UID:"10.0.0.30", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.30 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.30"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 415593889, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 11, 41, 88608610, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.30.17b3334f8a14ada1" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:11:41.419451 kubelet[1520]: W0212 19:11:41.419422 1520 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.30" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:11:41.419451 kubelet[1520]: E0212 19:11:41.419458 1520 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.30" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:11:41.549069 kubelet[1520]: W0212 19:11:41.549041 1520 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:11:41.549069 kubelet[1520]: E0212 19:11:41.549073 1520 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:11:41.563441 kubelet[1520]: W0212 19:11:41.563421 1520 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:11:41.563441 kubelet[1520]: E0212 19:11:41.563443 1520 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:11:41.597479 kubelet[1520]: W0212 19:11:41.597457 1520 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: 
runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:11:41.597479 kubelet[1520]: E0212 19:11:41.597482 1520 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:11:41.789063 kubelet[1520]: E0212 19:11:41.789028 1520 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.0.0.30" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:11:41.891281 kubelet[1520]: I0212 19:11:41.891255 1520 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.30" Feb 12 19:11:41.894716 kubelet[1520]: E0212 19:11:41.894622 1520 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.30.17b3334f8a148b1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.30", UID:"10.0.0.30", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.30 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.30"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 415585050, time.Local), 
LastTimestamp:time.Date(2024, time.February, 12, 19, 11, 41, 891210688, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.30.17b3334f8a148b1a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:11:41.895066 kubelet[1520]: E0212 19:11:41.895041 1520 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.30" Feb 12 19:11:41.896163 kubelet[1520]: E0212 19:11:41.896091 1520 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.30.17b3334f8a149eac", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.30", UID:"10.0.0.30", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.30 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.30"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 415590060, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 11, 41, 891222717, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.30.17b3334f8a149eac" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:11:41.978896 kubelet[1520]: E0212 19:11:41.978805 1520 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.30.17b3334f8a14ada1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.30", UID:"10.0.0.30", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.30 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.30"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 415593889, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 11, 41, 891226253, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.30.17b3334f8a14ada1" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:11:42.373987 kubelet[1520]: E0212 19:11:42.373950 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:11:43.374368 kubelet[1520]: E0212 19:11:43.374333 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:11:43.390828 kubelet[1520]: E0212 19:11:43.390795 1520 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.0.0.30" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:11:43.495925 kubelet[1520]: I0212 19:11:43.495899 1520 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.30" Feb 12 19:11:43.497234 kubelet[1520]: E0212 19:11:43.497209 1520 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.30" Feb 12 19:11:43.497344 kubelet[1520]: E0212 19:11:43.497207 1520 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.30.17b3334f8a148b1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.30", UID:"10.0.0.30", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.30 status is now: NodeHasSufficientMemory", 
Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.30"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 415585050, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 11, 43, 495863966, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.30.17b3334f8a148b1a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:11:43.498353 kubelet[1520]: E0212 19:11:43.498273 1520 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.30.17b3334f8a149eac", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.30", UID:"10.0.0.30", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.30 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.30"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 415590060, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 11, 43, 495869027, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.30.17b3334f8a149eac" is forbidden: User 
"system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:11:43.499156 kubelet[1520]: E0212 19:11:43.499095 1520 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.30.17b3334f8a14ada1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.30", UID:"10.0.0.30", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.30 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.30"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 415593889, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 11, 43, 495871902, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.30.17b3334f8a14ada1" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:11:43.557680 kubelet[1520]: W0212 19:11:43.557650 1520 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:11:43.557838 kubelet[1520]: E0212 19:11:43.557826 1520 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:11:43.730090 kubelet[1520]: W0212 19:11:43.730058 1520 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:11:43.730249 kubelet[1520]: E0212 19:11:43.730236 1520 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:11:44.303238 kubelet[1520]: W0212 19:11:44.303205 1520 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.30" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:11:44.303447 kubelet[1520]: E0212 19:11:44.303432 1520 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.30" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:11:44.375148 kubelet[1520]: E0212 19:11:44.375116 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 12 19:11:44.395714 kubelet[1520]: W0212 19:11:44.395677 1520 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:11:44.395714 kubelet[1520]: E0212 19:11:44.395709 1520 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:11:45.375953 kubelet[1520]: E0212 19:11:45.375918 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:11:46.376929 kubelet[1520]: E0212 19:11:46.376879 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:11:46.592537 kubelet[1520]: E0212 19:11:46.592490 1520 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.0.0.30" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:11:46.698065 kubelet[1520]: I0212 19:11:46.698025 1520 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.30" Feb 12 19:11:46.699355 kubelet[1520]: E0212 19:11:46.699261 1520 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.30.17b3334f8a148b1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.30", UID:"10.0.0.30", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.30 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.30"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 415585050, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 11, 46, 697966329, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.30.17b3334f8a148b1a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:11:46.699493 kubelet[1520]: E0212 19:11:46.699335 1520 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.30" Feb 12 19:11:46.700297 kubelet[1520]: E0212 19:11:46.700230 1520 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.30.17b3334f8a149eac", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.30", UID:"10.0.0.30", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.30 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.30"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 415590060, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 11, 46, 697977903, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.30.17b3334f8a149eac" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:11:46.701502 kubelet[1520]: E0212 19:11:46.701447 1520 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.30.17b3334f8a14ada1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.30", UID:"10.0.0.30", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.30 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.30"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 11, 40, 415593889, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 11, 46, 697980766, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.30.17b3334f8a14ada1" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:11:47.182372 kubelet[1520]: W0212 19:11:47.182347 1520 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:11:47.182507 kubelet[1520]: E0212 19:11:47.182380 1520 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:11:47.377871 kubelet[1520]: E0212 19:11:47.377836 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:11:47.595172 kubelet[1520]: W0212 19:11:47.595076 1520 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.30" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:11:47.595358 kubelet[1520]: E0212 19:11:47.595341 1520 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.30" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:11:47.723177 kubelet[1520]: W0212 19:11:47.723132 1520 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:11:47.723177 kubelet[1520]: E0212 19:11:47.723180 1520 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster 
scope Feb 12 19:11:48.378886 kubelet[1520]: E0212 19:11:48.378851 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:11:49.380005 kubelet[1520]: E0212 19:11:49.379963 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:11:50.356869 kubelet[1520]: I0212 19:11:50.356821 1520 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 12 19:11:50.381075 kubelet[1520]: E0212 19:11:50.381033 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:11:50.428444 kubelet[1520]: E0212 19:11:50.428419 1520 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.30\" not found" Feb 12 19:11:50.725304 kubelet[1520]: E0212 19:11:50.725266 1520 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.30" not found Feb 12 19:11:51.381821 kubelet[1520]: E0212 19:11:51.381780 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:11:51.985392 kubelet[1520]: E0212 19:11:51.985368 1520 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.30" not found Feb 12 19:11:52.382879 kubelet[1520]: E0212 19:11:52.382783 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:11:52.997090 kubelet[1520]: E0212 19:11:52.997047 1520 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.30\" not found" node="10.0.0.30" Feb 12 19:11:53.101151 kubelet[1520]: I0212 
19:11:53.101115 1520 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.30" Feb 12 19:11:53.384102 kubelet[1520]: E0212 19:11:53.384003 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:11:53.386926 kubelet[1520]: I0212 19:11:53.386898 1520 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.30" Feb 12 19:11:53.395015 kubelet[1520]: E0212 19:11:53.394975 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:53.495129 kubelet[1520]: E0212 19:11:53.495072 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:53.524635 sudo[1330]: pam_unix(sudo:session): session closed for user root Feb 12 19:11:53.526336 sshd[1324]: pam_unix(sshd:session): session closed for user core Feb 12 19:11:53.528690 systemd[1]: sshd@4-10.0.0.30:22-10.0.0.1:52316.service: Deactivated successfully. Feb 12 19:11:53.529726 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 19:11:53.529744 systemd-logind[1198]: Session 5 logged out. Waiting for processes to exit. Feb 12 19:11:53.530820 systemd-logind[1198]: Removed session 5. 
Feb 12 19:11:53.595894 kubelet[1520]: E0212 19:11:53.595858 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:53.696587 kubelet[1520]: E0212 19:11:53.696530 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:53.796970 kubelet[1520]: E0212 19:11:53.796913 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:53.897509 kubelet[1520]: E0212 19:11:53.897473 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:54.000065 kubelet[1520]: E0212 19:11:53.997853 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:54.098837 kubelet[1520]: E0212 19:11:54.098774 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:54.199645 kubelet[1520]: E0212 19:11:54.199594 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:54.300427 kubelet[1520]: E0212 19:11:54.300313 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:54.384210 kubelet[1520]: E0212 19:11:54.384159 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:11:54.401391 kubelet[1520]: E0212 19:11:54.401351 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:54.502266 kubelet[1520]: E0212 19:11:54.502216 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:54.603021 kubelet[1520]: E0212 19:11:54.602901 1520 
kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:54.703133 kubelet[1520]: E0212 19:11:54.703088 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:54.804118 kubelet[1520]: E0212 19:11:54.804069 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:54.905198 kubelet[1520]: E0212 19:11:54.905090 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:55.005966 kubelet[1520]: E0212 19:11:55.005914 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:55.106872 kubelet[1520]: E0212 19:11:55.106816 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:55.207665 kubelet[1520]: E0212 19:11:55.207618 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:55.308231 kubelet[1520]: E0212 19:11:55.308186 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:55.385147 kubelet[1520]: E0212 19:11:55.385106 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:11:55.408730 kubelet[1520]: E0212 19:11:55.408696 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:55.510363 kubelet[1520]: E0212 19:11:55.510234 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:55.610475 kubelet[1520]: E0212 19:11:55.610415 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node 
\"10.0.0.30\" not found" Feb 12 19:11:55.711054 kubelet[1520]: E0212 19:11:55.711005 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:55.813893 kubelet[1520]: E0212 19:11:55.813743 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:55.914699 kubelet[1520]: E0212 19:11:55.914652 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:56.015902 kubelet[1520]: E0212 19:11:56.015855 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:56.116661 kubelet[1520]: E0212 19:11:56.116555 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:56.217143 kubelet[1520]: E0212 19:11:56.217102 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:56.317771 kubelet[1520]: E0212 19:11:56.317735 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:56.385452 kubelet[1520]: E0212 19:11:56.385359 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:11:56.418773 kubelet[1520]: E0212 19:11:56.418737 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:56.519555 kubelet[1520]: E0212 19:11:56.519518 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:56.619945 kubelet[1520]: E0212 19:11:56.619889 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:56.720739 kubelet[1520]: E0212 19:11:56.720681 
1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:56.821478 kubelet[1520]: E0212 19:11:56.821440 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:56.922131 kubelet[1520]: E0212 19:11:56.922096 1520 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Feb 12 19:11:57.023463 kubelet[1520]: I0212 19:11:57.023358 1520 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 12 19:11:57.023915 env[1213]: time="2024-02-12T19:11:57.023873184Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 12 19:11:57.024395 kubelet[1520]: I0212 19:11:57.024015 1520 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 12 19:11:57.384997 kubelet[1520]: I0212 19:11:57.384850 1520 apiserver.go:52] "Watching apiserver" Feb 12 19:11:57.385961 kubelet[1520]: E0212 19:11:57.385911 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:11:57.389514 kubelet[1520]: I0212 19:11:57.389488 1520 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:11:57.389590 kubelet[1520]: I0212 19:11:57.389552 1520 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:11:57.484308 kubelet[1520]: I0212 19:11:57.484261 1520 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 19:11:57.495060 kubelet[1520]: I0212 19:11:57.494938 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-cilium-config-path\") pod \"cilium-9ng4k\" (UID: \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\") " 
pod="kube-system/cilium-9ng4k" Feb 12 19:11:57.495060 kubelet[1520]: I0212 19:11:57.494993 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68da87dc-3c93-4cb8-a1d6-fca0614ffe50-lib-modules\") pod \"kube-proxy-mk8lb\" (UID: \"68da87dc-3c93-4cb8-a1d6-fca0614ffe50\") " pod="kube-system/kube-proxy-mk8lb" Feb 12 19:11:57.495060 kubelet[1520]: I0212 19:11:57.495016 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-lib-modules\") pod \"cilium-9ng4k\" (UID: \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\") " pod="kube-system/cilium-9ng4k" Feb 12 19:11:57.495196 kubelet[1520]: I0212 19:11:57.495073 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-cni-path\") pod \"cilium-9ng4k\" (UID: \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\") " pod="kube-system/cilium-9ng4k" Feb 12 19:11:57.495196 kubelet[1520]: I0212 19:11:57.495110 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-clustermesh-secrets\") pod \"cilium-9ng4k\" (UID: \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\") " pod="kube-system/cilium-9ng4k" Feb 12 19:11:57.495196 kubelet[1520]: I0212 19:11:57.495152 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr9gv\" (UniqueName: \"kubernetes.io/projected/68da87dc-3c93-4cb8-a1d6-fca0614ffe50-kube-api-access-lr9gv\") pod \"kube-proxy-mk8lb\" (UID: \"68da87dc-3c93-4cb8-a1d6-fca0614ffe50\") " pod="kube-system/kube-proxy-mk8lb" Feb 12 19:11:57.495196 kubelet[1520]: I0212 19:11:57.495172 1520 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-hostproc\") pod \"cilium-9ng4k\" (UID: \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\") " pod="kube-system/cilium-9ng4k" Feb 12 19:11:57.495196 kubelet[1520]: I0212 19:11:57.495192 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-etc-cni-netd\") pod \"cilium-9ng4k\" (UID: \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\") " pod="kube-system/cilium-9ng4k" Feb 12 19:11:57.495366 kubelet[1520]: I0212 19:11:57.495210 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-host-proc-sys-kernel\") pod \"cilium-9ng4k\" (UID: \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\") " pod="kube-system/cilium-9ng4k" Feb 12 19:11:57.495366 kubelet[1520]: I0212 19:11:57.495246 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2fsd\" (UniqueName: \"kubernetes.io/projected/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-kube-api-access-q2fsd\") pod \"cilium-9ng4k\" (UID: \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\") " pod="kube-system/cilium-9ng4k" Feb 12 19:11:57.495366 kubelet[1520]: I0212 19:11:57.495265 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-cilium-run\") pod \"cilium-9ng4k\" (UID: \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\") " pod="kube-system/cilium-9ng4k" Feb 12 19:11:57.495366 kubelet[1520]: I0212 19:11:57.495281 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-bpf-maps\") pod \"cilium-9ng4k\" (UID: \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\") " pod="kube-system/cilium-9ng4k" Feb 12 19:11:57.495366 kubelet[1520]: I0212 19:11:57.495302 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-cilium-cgroup\") pod \"cilium-9ng4k\" (UID: \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\") " pod="kube-system/cilium-9ng4k" Feb 12 19:11:57.495366 kubelet[1520]: I0212 19:11:57.495344 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-xtables-lock\") pod \"cilium-9ng4k\" (UID: \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\") " pod="kube-system/cilium-9ng4k" Feb 12 19:11:57.495495 kubelet[1520]: I0212 19:11:57.495365 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-host-proc-sys-net\") pod \"cilium-9ng4k\" (UID: \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\") " pod="kube-system/cilium-9ng4k" Feb 12 19:11:57.495495 kubelet[1520]: I0212 19:11:57.495389 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-hubble-tls\") pod \"cilium-9ng4k\" (UID: \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\") " pod="kube-system/cilium-9ng4k" Feb 12 19:11:57.495495 kubelet[1520]: I0212 19:11:57.495429 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/68da87dc-3c93-4cb8-a1d6-fca0614ffe50-kube-proxy\") pod \"kube-proxy-mk8lb\" (UID: 
\"68da87dc-3c93-4cb8-a1d6-fca0614ffe50\") " pod="kube-system/kube-proxy-mk8lb" Feb 12 19:11:57.495495 kubelet[1520]: I0212 19:11:57.495464 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68da87dc-3c93-4cb8-a1d6-fca0614ffe50-xtables-lock\") pod \"kube-proxy-mk8lb\" (UID: \"68da87dc-3c93-4cb8-a1d6-fca0614ffe50\") " pod="kube-system/kube-proxy-mk8lb" Feb 12 19:11:57.495495 kubelet[1520]: I0212 19:11:57.495479 1520 reconciler.go:41] "Reconciler: start to sync state" Feb 12 19:11:57.694024 kubelet[1520]: E0212 19:11:57.693995 1520 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:11:57.695747 env[1213]: time="2024-02-12T19:11:57.694985950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mk8lb,Uid:68da87dc-3c93-4cb8-a1d6-fca0614ffe50,Namespace:kube-system,Attempt:0,}" Feb 12 19:11:57.992813 kubelet[1520]: E0212 19:11:57.992683 1520 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:11:57.993233 env[1213]: time="2024-02-12T19:11:57.993190184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9ng4k,Uid:e1847f80-f1f1-48a2-aa2a-cda00e5f14f2,Namespace:kube-system,Attempt:0,}" Feb 12 19:11:58.386492 kubelet[1520]: E0212 19:11:58.386391 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:11:58.465887 env[1213]: time="2024-02-12T19:11:58.460768291Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:11:58.467865 env[1213]: time="2024-02-12T19:11:58.467826795Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:11:58.468604 env[1213]: time="2024-02-12T19:11:58.468544502Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:11:58.471009 env[1213]: time="2024-02-12T19:11:58.470957249Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:11:58.471986 env[1213]: time="2024-02-12T19:11:58.471893196Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:11:58.473279 env[1213]: time="2024-02-12T19:11:58.473220990Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:11:58.475112 env[1213]: time="2024-02-12T19:11:58.475079982Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:11:58.477457 env[1213]: time="2024-02-12T19:11:58.477419007Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:11:58.509297 env[1213]: time="2024-02-12T19:11:58.509228175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:11:58.509434 env[1213]: time="2024-02-12T19:11:58.509305703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:11:58.509434 env[1213]: time="2024-02-12T19:11:58.509348053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:11:58.509629 env[1213]: time="2024-02-12T19:11:58.509594700Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/43afd43fae2e1a9fb696e965f83235fc8f3699e30d03ba88f0c0e409966ac791 pid=1616 runtime=io.containerd.runc.v2 Feb 12 19:11:58.510239 env[1213]: time="2024-02-12T19:11:58.510178385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:11:58.510332 env[1213]: time="2024-02-12T19:11:58.510238204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:11:58.510332 env[1213]: time="2024-02-12T19:11:58.510266450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:11:58.510525 env[1213]: time="2024-02-12T19:11:58.510469746Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/68ecd8afdc4e45548da96c50eb5b64f9a8fd0198c037d11e620161b4b89ba1ee pid=1629 runtime=io.containerd.runc.v2 Feb 12 19:11:58.569783 env[1213]: time="2024-02-12T19:11:58.569741337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9ng4k,Uid:e1847f80-f1f1-48a2-aa2a-cda00e5f14f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"68ecd8afdc4e45548da96c50eb5b64f9a8fd0198c037d11e620161b4b89ba1ee\"" Feb 12 19:11:58.571554 kubelet[1520]: E0212 19:11:58.571532 1520 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:11:58.572845 env[1213]: time="2024-02-12T19:11:58.572809728Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 12 19:11:58.575829 env[1213]: time="2024-02-12T19:11:58.575730355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mk8lb,Uid:68da87dc-3c93-4cb8-a1d6-fca0614ffe50,Namespace:kube-system,Attempt:0,} returns sandbox id \"43afd43fae2e1a9fb696e965f83235fc8f3699e30d03ba88f0c0e409966ac791\"" Feb 12 19:11:58.576510 kubelet[1520]: E0212 19:11:58.576492 1520 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:11:58.603721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2979326455.mount: Deactivated successfully. 
Feb 12 19:11:59.387779 kubelet[1520]: E0212 19:11:59.387527 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:00.372487 kubelet[1520]: E0212 19:12:00.372442 1520 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:00.388683 kubelet[1520]: E0212 19:12:00.388636 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:01.389079 kubelet[1520]: E0212 19:12:01.389014 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:02.337572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4052125819.mount: Deactivated successfully. Feb 12 19:12:02.389541 kubelet[1520]: E0212 19:12:02.389485 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:03.390661 kubelet[1520]: E0212 19:12:03.390615 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:04.391367 kubelet[1520]: E0212 19:12:04.391312 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:04.803668 env[1213]: time="2024-02-12T19:12:04.803611751Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:12:04.804815 env[1213]: time="2024-02-12T19:12:04.804779218Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:12:04.806406 
env[1213]: time="2024-02-12T19:12:04.806371640Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:12:04.807042 env[1213]: time="2024-02-12T19:12:04.807006591Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 12 19:12:04.808651 env[1213]: time="2024-02-12T19:12:04.808620109Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 12 19:12:04.809502 env[1213]: time="2024-02-12T19:12:04.809454168Z" level=info msg="CreateContainer within sandbox \"68ecd8afdc4e45548da96c50eb5b64f9a8fd0198c037d11e620161b4b89ba1ee\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:12:04.820224 env[1213]: time="2024-02-12T19:12:04.820178569Z" level=info msg="CreateContainer within sandbox \"68ecd8afdc4e45548da96c50eb5b64f9a8fd0198c037d11e620161b4b89ba1ee\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d8e90bd4a5c475696482bc5e17c1089837a1ee68e18779cd35213528adc05c13\"" Feb 12 19:12:04.820892 env[1213]: time="2024-02-12T19:12:04.820865679Z" level=info msg="StartContainer for \"d8e90bd4a5c475696482bc5e17c1089837a1ee68e18779cd35213528adc05c13\"" Feb 12 19:12:04.879751 env[1213]: time="2024-02-12T19:12:04.879700996Z" level=info msg="StartContainer for \"d8e90bd4a5c475696482bc5e17c1089837a1ee68e18779cd35213528adc05c13\" returns successfully" Feb 12 19:12:05.031620 env[1213]: time="2024-02-12T19:12:05.031569898Z" level=info msg="shim disconnected" id=d8e90bd4a5c475696482bc5e17c1089837a1ee68e18779cd35213528adc05c13 Feb 12 19:12:05.031858 env[1213]: time="2024-02-12T19:12:05.031838513Z" level=warning msg="cleaning up after shim disconnected" 
id=d8e90bd4a5c475696482bc5e17c1089837a1ee68e18779cd35213528adc05c13 namespace=k8s.io Feb 12 19:12:05.031924 env[1213]: time="2024-02-12T19:12:05.031910800Z" level=info msg="cleaning up dead shim" Feb 12 19:12:05.039635 env[1213]: time="2024-02-12T19:12:05.039595832Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:12:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1744 runtime=io.containerd.runc.v2\n" Feb 12 19:12:05.392153 kubelet[1520]: E0212 19:12:05.392095 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:05.590099 kubelet[1520]: E0212 19:12:05.590046 1520 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:12:05.592619 env[1213]: time="2024-02-12T19:12:05.592492481Z" level=info msg="CreateContainer within sandbox \"68ecd8afdc4e45548da96c50eb5b64f9a8fd0198c037d11e620161b4b89ba1ee\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 19:12:05.615413 env[1213]: time="2024-02-12T19:12:05.615362978Z" level=info msg="CreateContainer within sandbox \"68ecd8afdc4e45548da96c50eb5b64f9a8fd0198c037d11e620161b4b89ba1ee\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"af355eeeb8b336912859033a4b27d4921d3097c8d976515576021059ac20c666\"" Feb 12 19:12:05.616255 env[1213]: time="2024-02-12T19:12:05.616223617Z" level=info msg="StartContainer for \"af355eeeb8b336912859033a4b27d4921d3097c8d976515576021059ac20c666\"" Feb 12 19:12:05.689520 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:12:05.689977 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:12:05.690151 systemd[1]: Stopping systemd-sysctl.service... Feb 12 19:12:05.691695 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:12:05.700138 systemd[1]: Finished systemd-sysctl.service. 
Feb 12 19:12:05.707391 env[1213]: time="2024-02-12T19:12:05.707341694Z" level=info msg="StartContainer for \"af355eeeb8b336912859033a4b27d4921d3097c8d976515576021059ac20c666\" returns successfully" Feb 12 19:12:05.738591 env[1213]: time="2024-02-12T19:12:05.738540357Z" level=info msg="shim disconnected" id=af355eeeb8b336912859033a4b27d4921d3097c8d976515576021059ac20c666 Feb 12 19:12:05.738591 env[1213]: time="2024-02-12T19:12:05.738589193Z" level=warning msg="cleaning up after shim disconnected" id=af355eeeb8b336912859033a4b27d4921d3097c8d976515576021059ac20c666 namespace=k8s.io Feb 12 19:12:05.738591 env[1213]: time="2024-02-12T19:12:05.738600121Z" level=info msg="cleaning up dead shim" Feb 12 19:12:05.745358 env[1213]: time="2024-02-12T19:12:05.745294429Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:12:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1808 runtime=io.containerd.runc.v2\n" Feb 12 19:12:05.816419 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8e90bd4a5c475696482bc5e17c1089837a1ee68e18779cd35213528adc05c13-rootfs.mount: Deactivated successfully. Feb 12 19:12:06.029406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1602859802.mount: Deactivated successfully. 
Feb 12 19:12:06.392796 kubelet[1520]: E0212 19:12:06.392589 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:06.592813 kubelet[1520]: E0212 19:12:06.592783 1520 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:12:06.594739 env[1213]: time="2024-02-12T19:12:06.594693494Z" level=info msg="CreateContainer within sandbox \"68ecd8afdc4e45548da96c50eb5b64f9a8fd0198c037d11e620161b4b89ba1ee\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 19:12:06.597432 env[1213]: time="2024-02-12T19:12:06.597380783Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:12:06.600134 env[1213]: time="2024-02-12T19:12:06.600095613Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:12:06.601712 env[1213]: time="2024-02-12T19:12:06.601677289Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:12:06.603268 env[1213]: time="2024-02-12T19:12:06.603230222Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:12:06.604242 env[1213]: time="2024-02-12T19:12:06.604208452Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 12 
19:12:06.605859 env[1213]: time="2024-02-12T19:12:06.605815188Z" level=info msg="CreateContainer within sandbox \"43afd43fae2e1a9fb696e965f83235fc8f3699e30d03ba88f0c0e409966ac791\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 19:12:06.608975 env[1213]: time="2024-02-12T19:12:06.608933464Z" level=info msg="CreateContainer within sandbox \"68ecd8afdc4e45548da96c50eb5b64f9a8fd0198c037d11e620161b4b89ba1ee\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4be20a7bf574c884281c0cbd5cf3fea34e5f8dcfaab5cd14ee3ec84745c87944\"" Feb 12 19:12:06.609853 env[1213]: time="2024-02-12T19:12:06.609813294Z" level=info msg="StartContainer for \"4be20a7bf574c884281c0cbd5cf3fea34e5f8dcfaab5cd14ee3ec84745c87944\"" Feb 12 19:12:06.621134 env[1213]: time="2024-02-12T19:12:06.621084909Z" level=info msg="CreateContainer within sandbox \"43afd43fae2e1a9fb696e965f83235fc8f3699e30d03ba88f0c0e409966ac791\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8cdb85c6275292ddaa4a370f206a99f5574cdc4ba5743e0913d8e4e508bc6136\"" Feb 12 19:12:06.621975 env[1213]: time="2024-02-12T19:12:06.621951048Z" level=info msg="StartContainer for \"8cdb85c6275292ddaa4a370f206a99f5574cdc4ba5743e0913d8e4e508bc6136\"" Feb 12 19:12:06.703374 env[1213]: time="2024-02-12T19:12:06.702478664Z" level=info msg="StartContainer for \"4be20a7bf574c884281c0cbd5cf3fea34e5f8dcfaab5cd14ee3ec84745c87944\" returns successfully" Feb 12 19:12:06.708050 env[1213]: time="2024-02-12T19:12:06.707980103Z" level=info msg="StartContainer for \"8cdb85c6275292ddaa4a370f206a99f5574cdc4ba5743e0913d8e4e508bc6136\" returns successfully" Feb 12 19:12:06.806585 env[1213]: time="2024-02-12T19:12:06.806539908Z" level=info msg="shim disconnected" id=4be20a7bf574c884281c0cbd5cf3fea34e5f8dcfaab5cd14ee3ec84745c87944 Feb 12 19:12:06.806585 env[1213]: time="2024-02-12T19:12:06.806583263Z" level=warning msg="cleaning up after shim disconnected" 
id=4be20a7bf574c884281c0cbd5cf3fea34e5f8dcfaab5cd14ee3ec84745c87944 namespace=k8s.io Feb 12 19:12:06.806867 env[1213]: time="2024-02-12T19:12:06.806608604Z" level=info msg="cleaning up dead shim" Feb 12 19:12:06.812940 env[1213]: time="2024-02-12T19:12:06.812896317Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:12:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1896 runtime=io.containerd.runc.v2\n" Feb 12 19:12:06.816460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount186858378.mount: Deactivated successfully. Feb 12 19:12:07.393475 kubelet[1520]: E0212 19:12:07.393438 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:07.596695 kubelet[1520]: E0212 19:12:07.596333 1520 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:12:07.598700 kubelet[1520]: E0212 19:12:07.598678 1520 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:12:07.600771 env[1213]: time="2024-02-12T19:12:07.600732631Z" level=info msg="CreateContainer within sandbox \"68ecd8afdc4e45548da96c50eb5b64f9a8fd0198c037d11e620161b4b89ba1ee\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 19:12:07.610990 kubelet[1520]: I0212 19:12:07.610920 1520 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-mk8lb" podStartSLOduration=-9.223372022243902e+09 pod.CreationTimestamp="2024-02-12 19:11:53 +0000 UTC" firstStartedPulling="2024-02-12 19:11:58.576821638 +0000 UTC m=+19.061278074" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:12:07.61075703 +0000 UTC m=+28.095213466" watchObservedRunningTime="2024-02-12 19:12:07.610873199 
+0000 UTC m=+28.095329595" Feb 12 19:12:07.622249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3154720207.mount: Deactivated successfully. Feb 12 19:12:07.626601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2584371814.mount: Deactivated successfully. Feb 12 19:12:07.628921 env[1213]: time="2024-02-12T19:12:07.628874157Z" level=info msg="CreateContainer within sandbox \"68ecd8afdc4e45548da96c50eb5b64f9a8fd0198c037d11e620161b4b89ba1ee\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cf429d8a19c2796f60a1011392d851117b320ad680bc705ed8564facfe3596f2\"" Feb 12 19:12:07.629729 env[1213]: time="2024-02-12T19:12:07.629697225Z" level=info msg="StartContainer for \"cf429d8a19c2796f60a1011392d851117b320ad680bc705ed8564facfe3596f2\"" Feb 12 19:12:07.691394 env[1213]: time="2024-02-12T19:12:07.691325430Z" level=info msg="StartContainer for \"cf429d8a19c2796f60a1011392d851117b320ad680bc705ed8564facfe3596f2\" returns successfully" Feb 12 19:12:07.710002 env[1213]: time="2024-02-12T19:12:07.709956869Z" level=info msg="shim disconnected" id=cf429d8a19c2796f60a1011392d851117b320ad680bc705ed8564facfe3596f2 Feb 12 19:12:07.710002 env[1213]: time="2024-02-12T19:12:07.710000783Z" level=warning msg="cleaning up after shim disconnected" id=cf429d8a19c2796f60a1011392d851117b320ad680bc705ed8564facfe3596f2 namespace=k8s.io Feb 12 19:12:07.710002 env[1213]: time="2024-02-12T19:12:07.710010230Z" level=info msg="cleaning up dead shim" Feb 12 19:12:07.716842 env[1213]: time="2024-02-12T19:12:07.716801766Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:12:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2086 runtime=io.containerd.runc.v2\n" Feb 12 19:12:08.394117 kubelet[1520]: E0212 19:12:08.394067 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:08.606627 kubelet[1520]: E0212 19:12:08.606592 1520 dns.go:156] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:12:08.606627 kubelet[1520]: E0212 19:12:08.606618 1520 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:12:08.609081 env[1213]: time="2024-02-12T19:12:08.609038573Z" level=info msg="CreateContainer within sandbox \"68ecd8afdc4e45548da96c50eb5b64f9a8fd0198c037d11e620161b4b89ba1ee\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 19:12:08.655987 env[1213]: time="2024-02-12T19:12:08.655675756Z" level=info msg="CreateContainer within sandbox \"68ecd8afdc4e45548da96c50eb5b64f9a8fd0198c037d11e620161b4b89ba1ee\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e22d24c2ea220d9a9d6d473abf8f409ea332ab0f405126a04cc7cd23cb197084\"" Feb 12 19:12:08.656449 env[1213]: time="2024-02-12T19:12:08.656419611Z" level=info msg="StartContainer for \"e22d24c2ea220d9a9d6d473abf8f409ea332ab0f405126a04cc7cd23cb197084\"" Feb 12 19:12:08.724722 env[1213]: time="2024-02-12T19:12:08.715424540Z" level=info msg="StartContainer for \"e22d24c2ea220d9a9d6d473abf8f409ea332ab0f405126a04cc7cd23cb197084\" returns successfully" Feb 12 19:12:08.847603 kubelet[1520]: I0212 19:12:08.847558 1520 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 19:12:08.963353 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 12 19:12:09.213339 kernel: Initializing XFRM netlink socket Feb 12 19:12:09.216337 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Feb 12 19:12:09.395170 kubelet[1520]: E0212 19:12:09.395118 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:09.611028 kubelet[1520]: E0212 19:12:09.610998 1520 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:12:09.625083 kubelet[1520]: I0212 19:12:09.625049 1520 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-9ng4k" podStartSLOduration=-9.223372020229773e+09 pod.CreationTimestamp="2024-02-12 19:11:53 +0000 UTC" firstStartedPulling="2024-02-12 19:11:58.572363831 +0000 UTC m=+19.056820267" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:12:09.62442989 +0000 UTC m=+30.108886326" watchObservedRunningTime="2024-02-12 19:12:09.62500276 +0000 UTC m=+30.109459196" Feb 12 19:12:10.263797 kubelet[1520]: I0212 19:12:10.263671 1520 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:12:10.284382 kubelet[1520]: I0212 19:12:10.284336 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x27jd\" (UniqueName: \"kubernetes.io/projected/9e18a911-bbfb-45b9-9ba7-a7bce631a5b6-kube-api-access-x27jd\") pod \"nginx-deployment-8ffc5cf85-wf6dp\" (UID: \"9e18a911-bbfb-45b9-9ba7-a7bce631a5b6\") " pod="default/nginx-deployment-8ffc5cf85-wf6dp" Feb 12 19:12:10.396354 kubelet[1520]: E0212 19:12:10.395835 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:10.422453 systemd-networkd[1109]: cilium_host: Link UP Feb 12 19:12:10.425832 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 12 19:12:10.425921 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 12 19:12:10.423139 systemd-networkd[1109]: 
cilium_net: Link UP Feb 12 19:12:10.424987 systemd-networkd[1109]: cilium_net: Gained carrier Feb 12 19:12:10.425167 systemd-networkd[1109]: cilium_host: Gained carrier Feb 12 19:12:10.491432 systemd-networkd[1109]: cilium_net: Gained IPv6LL Feb 12 19:12:10.511694 systemd-networkd[1109]: cilium_vxlan: Link UP Feb 12 19:12:10.511700 systemd-networkd[1109]: cilium_vxlan: Gained carrier Feb 12 19:12:10.567787 env[1213]: time="2024-02-12T19:12:10.567284397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-wf6dp,Uid:9e18a911-bbfb-45b9-9ba7-a7bce631a5b6,Namespace:default,Attempt:0,}" Feb 12 19:12:10.613164 kubelet[1520]: E0212 19:12:10.612432 1520 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:12:10.808445 kernel: NET: Registered PF_ALG protocol family Feb 12 19:12:10.875459 systemd-networkd[1109]: cilium_host: Gained IPv6LL Feb 12 19:12:11.396224 kubelet[1520]: E0212 19:12:11.396174 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:11.412077 systemd-networkd[1109]: lxc_health: Link UP Feb 12 19:12:11.420900 systemd-networkd[1109]: lxc_health: Gained carrier Feb 12 19:12:11.421365 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 19:12:11.614128 kubelet[1520]: E0212 19:12:11.614073 1520 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:12:11.631397 systemd-networkd[1109]: lxc77f539592dc3: Link UP Feb 12 19:12:11.645605 kernel: eth0: renamed from tmp404bd Feb 12 19:12:11.650718 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:12:11.650823 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc77f539592dc3: link becomes ready Feb 12 19:12:11.650775 
systemd-networkd[1109]: lxc77f539592dc3: Gained carrier Feb 12 19:12:12.020449 systemd-networkd[1109]: cilium_vxlan: Gained IPv6LL Feb 12 19:12:12.396694 kubelet[1520]: E0212 19:12:12.396546 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:12.615840 kubelet[1520]: E0212 19:12:12.615802 1520 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:12:12.659458 systemd-networkd[1109]: lxc_health: Gained IPv6LL Feb 12 19:12:12.851474 systemd-networkd[1109]: lxc77f539592dc3: Gained IPv6LL Feb 12 19:12:13.397482 kubelet[1520]: E0212 19:12:13.397440 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:13.617609 kubelet[1520]: E0212 19:12:13.617577 1520 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:12:14.397814 kubelet[1520]: E0212 19:12:14.397763 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:14.619601 kubelet[1520]: E0212 19:12:14.619550 1520 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:12:15.293202 env[1213]: time="2024-02-12T19:12:15.283738664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:12:15.293202 env[1213]: time="2024-02-12T19:12:15.283787328Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:12:15.293202 env[1213]: time="2024-02-12T19:12:15.283801175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:12:15.293202 env[1213]: time="2024-02-12T19:12:15.283991388Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/404bde6141cef0720b33ae1a2e11e06598800d8509ac8de0eef9258cdb6377e2 pid=2616 runtime=io.containerd.runc.v2 Feb 12 19:12:15.380962 systemd-resolved[1155]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 19:12:15.397919 kubelet[1520]: E0212 19:12:15.397885 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:15.398203 env[1213]: time="2024-02-12T19:12:15.398161730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-wf6dp,Uid:9e18a911-bbfb-45b9-9ba7-a7bce631a5b6,Namespace:default,Attempt:0,} returns sandbox id \"404bde6141cef0720b33ae1a2e11e06598800d8509ac8de0eef9258cdb6377e2\"" Feb 12 19:12:15.399765 env[1213]: time="2024-02-12T19:12:15.399722817Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 12 19:12:16.398065 kubelet[1520]: E0212 19:12:16.398025 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:17.399013 kubelet[1520]: E0212 19:12:17.398957 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:17.656637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount137771031.mount: Deactivated successfully. 
Feb 12 19:12:18.396864 env[1213]: time="2024-02-12T19:12:18.396811555Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:12:18.398098 env[1213]: time="2024-02-12T19:12:18.398058199Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:12:18.399630 kubelet[1520]: E0212 19:12:18.399596 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:18.399907 env[1213]: time="2024-02-12T19:12:18.399683762Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:12:18.401496 env[1213]: time="2024-02-12T19:12:18.401465551Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:12:18.402294 env[1213]: time="2024-02-12T19:12:18.402262886Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\"" Feb 12 19:12:18.404224 env[1213]: time="2024-02-12T19:12:18.404191697Z" level=info msg="CreateContainer within sandbox \"404bde6141cef0720b33ae1a2e11e06598800d8509ac8de0eef9258cdb6377e2\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 12 19:12:18.415068 env[1213]: time="2024-02-12T19:12:18.415028213Z" level=info msg="CreateContainer within sandbox \"404bde6141cef0720b33ae1a2e11e06598800d8509ac8de0eef9258cdb6377e2\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id 
\"e787f99eee2d2b1106afc3a1eab02b94bc7cac18301eafe62a14a73075afd9c5\"" Feb 12 19:12:18.415605 env[1213]: time="2024-02-12T19:12:18.415568200Z" level=info msg="StartContainer for \"e787f99eee2d2b1106afc3a1eab02b94bc7cac18301eafe62a14a73075afd9c5\"" Feb 12 19:12:18.472099 env[1213]: time="2024-02-12T19:12:18.470406415Z" level=info msg="StartContainer for \"e787f99eee2d2b1106afc3a1eab02b94bc7cac18301eafe62a14a73075afd9c5\" returns successfully" Feb 12 19:12:18.635601 kubelet[1520]: I0212 19:12:18.635557 1520 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-wf6dp" podStartSLOduration=-9.223372028219254e+09 pod.CreationTimestamp="2024-02-12 19:12:10 +0000 UTC" firstStartedPulling="2024-02-12 19:12:15.399249024 +0000 UTC m=+35.883705460" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:12:18.635218546 +0000 UTC m=+39.119674982" watchObservedRunningTime="2024-02-12 19:12:18.635521633 +0000 UTC m=+39.119978149" Feb 12 19:12:18.656722 systemd[1]: run-containerd-runc-k8s.io-e787f99eee2d2b1106afc3a1eab02b94bc7cac18301eafe62a14a73075afd9c5-runc.kAV9bm.mount: Deactivated successfully. Feb 12 19:12:19.014121 update_engine[1201]: I0212 19:12:19.013744 1201 update_attempter.cc:509] Updating boot flags... 
Feb 12 19:12:19.400021 kubelet[1520]: E0212 19:12:19.399895 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:20.373159 kubelet[1520]: E0212 19:12:20.373111 1520 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:20.400309 kubelet[1520]: E0212 19:12:20.400262 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:21.401496 kubelet[1520]: E0212 19:12:21.401458 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:22.034023 kubelet[1520]: I0212 19:12:22.033983 1520 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:12:22.056255 kubelet[1520]: I0212 19:12:22.056227 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/4839e711-fe79-4372-8aef-c3e238c570cc-data\") pod \"nfs-server-provisioner-0\" (UID: \"4839e711-fe79-4372-8aef-c3e238c570cc\") " pod="default/nfs-server-provisioner-0" Feb 12 19:12:22.056512 kubelet[1520]: I0212 19:12:22.056493 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffmgs\" (UniqueName: \"kubernetes.io/projected/4839e711-fe79-4372-8aef-c3e238c570cc-kube-api-access-ffmgs\") pod \"nfs-server-provisioner-0\" (UID: \"4839e711-fe79-4372-8aef-c3e238c570cc\") " pod="default/nfs-server-provisioner-0" Feb 12 19:12:22.342780 env[1213]: time="2024-02-12T19:12:22.342400622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:4839e711-fe79-4372-8aef-c3e238c570cc,Namespace:default,Attempt:0,}" Feb 12 19:12:22.365176 systemd-networkd[1109]: lxca936909bbc6c: Link UP Feb 12 19:12:22.375377 kernel: eth0: renamed from tmp3b7ef Feb 12 19:12:22.393265 
systemd-networkd[1109]: lxca936909bbc6c: Gained carrier Feb 12 19:12:22.393458 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:12:22.393488 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca936909bbc6c: link becomes ready Feb 12 19:12:22.404350 kubelet[1520]: E0212 19:12:22.402988 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:22.585489 env[1213]: time="2024-02-12T19:12:22.585417230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:12:22.585489 env[1213]: time="2024-02-12T19:12:22.585459005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:12:22.585643 env[1213]: time="2024-02-12T19:12:22.585470209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:12:22.585643 env[1213]: time="2024-02-12T19:12:22.585618220Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3b7efd8d22ba83d88e5ade530833883223fb4de8aa1b0468451c4c5a7a953472 pid=2802 runtime=io.containerd.runc.v2 Feb 12 19:12:22.621796 systemd-resolved[1155]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 19:12:22.640003 env[1213]: time="2024-02-12T19:12:22.639957537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:4839e711-fe79-4372-8aef-c3e238c570cc,Namespace:default,Attempt:0,} returns sandbox id \"3b7efd8d22ba83d88e5ade530833883223fb4de8aa1b0468451c4c5a7a953472\"" Feb 12 19:12:22.641513 env[1213]: time="2024-02-12T19:12:22.641485945Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 12 19:12:23.403770 kubelet[1520]: E0212 19:12:23.403692 1520 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:23.667796 systemd-networkd[1109]: lxca936909bbc6c: Gained IPv6LL Feb 12 19:12:24.404628 kubelet[1520]: E0212 19:12:24.404587 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:24.780055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3608789498.mount: Deactivated successfully. Feb 12 19:12:25.404712 kubelet[1520]: E0212 19:12:25.404673 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:26.405571 kubelet[1520]: E0212 19:12:26.405523 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:26.573829 env[1213]: time="2024-02-12T19:12:26.573782486Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:12:26.575888 env[1213]: time="2024-02-12T19:12:26.575857202Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:12:26.577422 env[1213]: time="2024-02-12T19:12:26.577392322Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:12:26.579580 env[1213]: time="2024-02-12T19:12:26.579549702Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 
19:12:26.580214 env[1213]: time="2024-02-12T19:12:26.580183003Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Feb 12 19:12:26.582305 env[1213]: time="2024-02-12T19:12:26.582265721Z" level=info msg="CreateContainer within sandbox \"3b7efd8d22ba83d88e5ade530833883223fb4de8aa1b0468451c4c5a7a953472\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 12 19:12:26.590474 env[1213]: time="2024-02-12T19:12:26.590427464Z" level=info msg="CreateContainer within sandbox \"3b7efd8d22ba83d88e5ade530833883223fb4de8aa1b0468451c4c5a7a953472\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"b44973c1dceac271093d8eb86e78bf2407b83a0af1c784e743e3e470a323240b\"" Feb 12 19:12:26.590890 env[1213]: time="2024-02-12T19:12:26.590862429Z" level=info msg="StartContainer for \"b44973c1dceac271093d8eb86e78bf2407b83a0af1c784e743e3e470a323240b\"" Feb 12 19:12:26.644044 env[1213]: time="2024-02-12T19:12:26.644001605Z" level=info msg="StartContainer for \"b44973c1dceac271093d8eb86e78bf2407b83a0af1c784e743e3e470a323240b\" returns successfully" Feb 12 19:12:27.406401 kubelet[1520]: E0212 19:12:27.406349 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:27.653292 kubelet[1520]: I0212 19:12:27.653247 1520 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372031201572e+09 pod.CreationTimestamp="2024-02-12 19:12:22 +0000 UTC" firstStartedPulling="2024-02-12 19:12:22.640956882 +0000 UTC m=+43.125413318" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:12:27.652919595 +0000 UTC m=+48.137376111" watchObservedRunningTime="2024-02-12 19:12:27.653203353 +0000 UTC m=+48.137659789" Feb 12 19:12:28.406795 
kubelet[1520]: E0212 19:12:28.406742 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:29.407437 kubelet[1520]: E0212 19:12:29.407369 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:30.407896 kubelet[1520]: E0212 19:12:30.407826 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:31.408477 kubelet[1520]: E0212 19:12:31.408409 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:32.409394 kubelet[1520]: E0212 19:12:32.409301 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:33.409624 kubelet[1520]: E0212 19:12:33.409552 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:34.410564 kubelet[1520]: E0212 19:12:34.410517 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:35.411408 kubelet[1520]: E0212 19:12:35.411346 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:36.133800 kubelet[1520]: I0212 19:12:36.133758 1520 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:12:36.227630 kubelet[1520]: I0212 19:12:36.227600 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsrjm\" (UniqueName: \"kubernetes.io/projected/09df33c0-41f2-41a1-aeb0-ce8c510f34ba-kube-api-access-nsrjm\") pod \"test-pod-1\" (UID: \"09df33c0-41f2-41a1-aeb0-ce8c510f34ba\") " pod="default/test-pod-1" Feb 12 19:12:36.227839 kubelet[1520]: I0212 19:12:36.227826 1520 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-60d6ef9b-94bf-43ca-adba-2a359717da79\" (UniqueName: \"kubernetes.io/nfs/09df33c0-41f2-41a1-aeb0-ce8c510f34ba-pvc-60d6ef9b-94bf-43ca-adba-2a359717da79\") pod \"test-pod-1\" (UID: \"09df33c0-41f2-41a1-aeb0-ce8c510f34ba\") " pod="default/test-pod-1" Feb 12 19:12:36.354366 kernel: FS-Cache: Loaded Feb 12 19:12:36.381369 kernel: RPC: Registered named UNIX socket transport module. Feb 12 19:12:36.381487 kernel: RPC: Registered udp transport module. Feb 12 19:12:36.381515 kernel: RPC: Registered tcp transport module. Feb 12 19:12:36.382343 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 12 19:12:36.411843 kubelet[1520]: E0212 19:12:36.411727 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:36.418375 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 12 19:12:36.558653 kernel: NFS: Registering the id_resolver key type Feb 12 19:12:36.558771 kernel: Key type id_resolver registered Feb 12 19:12:36.558793 kernel: Key type id_legacy registered Feb 12 19:12:36.585717 nfsidmap[2940]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 12 19:12:36.589139 nfsidmap[2943]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 12 19:12:36.737166 env[1213]: time="2024-02-12T19:12:36.737101516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:09df33c0-41f2-41a1-aeb0-ce8c510f34ba,Namespace:default,Attempt:0,}" Feb 12 19:12:36.766509 systemd-networkd[1109]: lxcf7274ecc83a1: Link UP Feb 12 19:12:36.775368 kernel: eth0: renamed from tmp2f1af Feb 12 19:12:36.783347 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:12:36.783437 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf7274ecc83a1: link becomes ready 
Feb 12 19:12:36.783193 systemd-networkd[1109]: lxcf7274ecc83a1: Gained carrier
Feb 12 19:12:36.975820 env[1213]: time="2024-02-12T19:12:36.975746311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:12:36.975820 env[1213]: time="2024-02-12T19:12:36.975790200Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:12:36.976056 env[1213]: time="2024-02-12T19:12:36.975800162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:12:36.976342 env[1213]: time="2024-02-12T19:12:36.976285415Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2f1af26d1d408a58753173b963dcdffcf9ba6348f22e7109f020b5d5c0e2b80e pid=2979 runtime=io.containerd.runc.v2
Feb 12 19:12:37.018861 systemd-resolved[1155]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 12 19:12:37.035486 env[1213]: time="2024-02-12T19:12:37.035439265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:09df33c0-41f2-41a1-aeb0-ce8c510f34ba,Namespace:default,Attempt:0,} returns sandbox id \"2f1af26d1d408a58753173b963dcdffcf9ba6348f22e7109f020b5d5c0e2b80e\""
Feb 12 19:12:37.036817 env[1213]: time="2024-02-12T19:12:37.036786077Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 12 19:12:37.322169 env[1213]: time="2024-02-12T19:12:37.321819032Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:12:37.323363 env[1213]: time="2024-02-12T19:12:37.323334675Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:12:37.325014 env[1213]: time="2024-02-12T19:12:37.324977422Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:12:37.328895 env[1213]: time="2024-02-12T19:12:37.328862747Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:12:37.329455 env[1213]: time="2024-02-12T19:12:37.329422692Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\""
Feb 12 19:12:37.331075 env[1213]: time="2024-02-12T19:12:37.331040234Z" level=info msg="CreateContainer within sandbox \"2f1af26d1d408a58753173b963dcdffcf9ba6348f22e7109f020b5d5c0e2b80e\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 12 19:12:37.346094 env[1213]: time="2024-02-12T19:12:37.346039755Z" level=info msg="CreateContainer within sandbox \"2f1af26d1d408a58753173b963dcdffcf9ba6348f22e7109f020b5d5c0e2b80e\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"a3d372d694f64dab81645b87d3379ec982e4e367b14e5fca698dd855a2ba3c08\""
Feb 12 19:12:37.346718 env[1213]: time="2024-02-12T19:12:37.346691277Z" level=info msg="StartContainer for \"a3d372d694f64dab81645b87d3379ec982e4e367b14e5fca698dd855a2ba3c08\""
Feb 12 19:12:37.412410 kubelet[1520]: E0212 19:12:37.412354 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:12:37.421949 env[1213]: time="2024-02-12T19:12:37.421889082Z" level=info msg="StartContainer for \"a3d372d694f64dab81645b87d3379ec982e4e367b14e5fca698dd855a2ba3c08\" returns successfully"
Feb 12 19:12:38.067483 systemd-networkd[1109]: lxcf7274ecc83a1: Gained IPv6LL
Feb 12 19:12:38.413423 kubelet[1520]: E0212 19:12:38.413299 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:12:39.413663 kubelet[1520]: E0212 19:12:39.413620 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:12:40.373126 kubelet[1520]: E0212 19:12:40.373092 1520 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:12:40.408185 kubelet[1520]: I0212 19:12:40.408103 1520 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.22337201844671e+09 pod.CreationTimestamp="2024-02-12 19:12:22 +0000 UTC" firstStartedPulling="2024-02-12 19:12:37.036507345 +0000 UTC m=+57.520963781" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:12:37.673173853 +0000 UTC m=+58.157630289" watchObservedRunningTime="2024-02-12 19:12:40.408064446 +0000 UTC m=+60.892520882"
Feb 12 19:12:40.414051 kubelet[1520]: E0212 19:12:40.413758 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:12:40.427941 systemd[1]: run-containerd-runc-k8s.io-e22d24c2ea220d9a9d6d473abf8f409ea332ab0f405126a04cc7cd23cb197084-runc.8st4eY.mount: Deactivated successfully.
Feb 12 19:12:40.451543 env[1213]: time="2024-02-12T19:12:40.451288740Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 19:12:40.456995 env[1213]: time="2024-02-12T19:12:40.456948661Z" level=info msg="StopContainer for \"e22d24c2ea220d9a9d6d473abf8f409ea332ab0f405126a04cc7cd23cb197084\" with timeout 1 (s)"
Feb 12 19:12:40.457234 env[1213]: time="2024-02-12T19:12:40.457203784Z" level=info msg="Stop container \"e22d24c2ea220d9a9d6d473abf8f409ea332ab0f405126a04cc7cd23cb197084\" with signal terminated"
Feb 12 19:12:40.462593 systemd-networkd[1109]: lxc_health: Link DOWN
Feb 12 19:12:40.462600 systemd-networkd[1109]: lxc_health: Lost carrier
Feb 12 19:12:40.513791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e22d24c2ea220d9a9d6d473abf8f409ea332ab0f405126a04cc7cd23cb197084-rootfs.mount: Deactivated successfully.
Feb 12 19:12:40.528235 env[1213]: time="2024-02-12T19:12:40.528168786Z" level=info msg="shim disconnected" id=e22d24c2ea220d9a9d6d473abf8f409ea332ab0f405126a04cc7cd23cb197084
Feb 12 19:12:40.528235 env[1213]: time="2024-02-12T19:12:40.528220035Z" level=warning msg="cleaning up after shim disconnected" id=e22d24c2ea220d9a9d6d473abf8f409ea332ab0f405126a04cc7cd23cb197084 namespace=k8s.io
Feb 12 19:12:40.528235 env[1213]: time="2024-02-12T19:12:40.528231877Z" level=info msg="cleaning up dead shim"
Feb 12 19:12:40.535349 env[1213]: time="2024-02-12T19:12:40.535288874Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:12:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3110 runtime=io.containerd.runc.v2\n"
Feb 12 19:12:40.537995 env[1213]: time="2024-02-12T19:12:40.537950286Z" level=info msg="StopContainer for \"e22d24c2ea220d9a9d6d473abf8f409ea332ab0f405126a04cc7cd23cb197084\" returns successfully"
Feb 12 19:12:40.538691 env[1213]: time="2024-02-12T19:12:40.538655725Z" level=info msg="StopPodSandbox for \"68ecd8afdc4e45548da96c50eb5b64f9a8fd0198c037d11e620161b4b89ba1ee\""
Feb 12 19:12:40.538860 env[1213]: time="2024-02-12T19:12:40.538835956Z" level=info msg="Container to stop \"d8e90bd4a5c475696482bc5e17c1089837a1ee68e18779cd35213528adc05c13\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:12:40.538946 env[1213]: time="2024-02-12T19:12:40.538928532Z" level=info msg="Container to stop \"af355eeeb8b336912859033a4b27d4921d3097c8d976515576021059ac20c666\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:12:40.539011 env[1213]: time="2024-02-12T19:12:40.538995183Z" level=info msg="Container to stop \"cf429d8a19c2796f60a1011392d851117b320ad680bc705ed8564facfe3596f2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:12:40.539079 env[1213]: time="2024-02-12T19:12:40.539061834Z" level=info msg="Container to stop \"4be20a7bf574c884281c0cbd5cf3fea34e5f8dcfaab5cd14ee3ec84745c87944\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:12:40.539143 env[1213]: time="2024-02-12T19:12:40.539126845Z" level=info msg="Container to stop \"e22d24c2ea220d9a9d6d473abf8f409ea332ab0f405126a04cc7cd23cb197084\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:12:40.540875 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-68ecd8afdc4e45548da96c50eb5b64f9a8fd0198c037d11e620161b4b89ba1ee-shm.mount: Deactivated successfully.
Feb 12 19:12:40.573923 env[1213]: time="2024-02-12T19:12:40.573873541Z" level=info msg="shim disconnected" id=68ecd8afdc4e45548da96c50eb5b64f9a8fd0198c037d11e620161b4b89ba1ee
Feb 12 19:12:40.573923 env[1213]: time="2024-02-12T19:12:40.573920909Z" level=warning msg="cleaning up after shim disconnected" id=68ecd8afdc4e45548da96c50eb5b64f9a8fd0198c037d11e620161b4b89ba1ee namespace=k8s.io
Feb 12 19:12:40.574409 env[1213]: time="2024-02-12T19:12:40.573931871Z" level=info msg="cleaning up dead shim"
Feb 12 19:12:40.581405 env[1213]: time="2024-02-12T19:12:40.581356651Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:12:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3145 runtime=io.containerd.runc.v2\n"
Feb 12 19:12:40.581705 env[1213]: time="2024-02-12T19:12:40.581680466Z" level=info msg="TearDown network for sandbox \"68ecd8afdc4e45548da96c50eb5b64f9a8fd0198c037d11e620161b4b89ba1ee\" successfully"
Feb 12 19:12:40.581765 env[1213]: time="2024-02-12T19:12:40.581705470Z" level=info msg="StopPodSandbox for \"68ecd8afdc4e45548da96c50eb5b64f9a8fd0198c037d11e620161b4b89ba1ee\" returns successfully"
Feb 12 19:12:40.669041 kubelet[1520]: I0212 19:12:40.668936 1520 scope.go:115] "RemoveContainer" containerID="e22d24c2ea220d9a9d6d473abf8f409ea332ab0f405126a04cc7cd23cb197084"
Feb 12 19:12:40.670560 env[1213]: time="2024-02-12T19:12:40.670516780Z" level=info msg="RemoveContainer for \"e22d24c2ea220d9a9d6d473abf8f409ea332ab0f405126a04cc7cd23cb197084\""
Feb 12 19:12:40.674543 env[1213]: time="2024-02-12T19:12:40.674501216Z" level=info msg="RemoveContainer for \"e22d24c2ea220d9a9d6d473abf8f409ea332ab0f405126a04cc7cd23cb197084\" returns successfully"
Feb 12 19:12:40.674931 kubelet[1520]: I0212 19:12:40.674905 1520 scope.go:115] "RemoveContainer" containerID="cf429d8a19c2796f60a1011392d851117b320ad680bc705ed8564facfe3596f2"
Feb 12 19:12:40.675920 env[1213]: time="2024-02-12T19:12:40.675894213Z" level=info msg="RemoveContainer for \"cf429d8a19c2796f60a1011392d851117b320ad680bc705ed8564facfe3596f2\""
Feb 12 19:12:40.682546 env[1213]: time="2024-02-12T19:12:40.682502654Z" level=info msg="RemoveContainer for \"cf429d8a19c2796f60a1011392d851117b320ad680bc705ed8564facfe3596f2\" returns successfully"
Feb 12 19:12:40.682846 kubelet[1520]: I0212 19:12:40.682731 1520 scope.go:115] "RemoveContainer" containerID="4be20a7bf574c884281c0cbd5cf3fea34e5f8dcfaab5cd14ee3ec84745c87944"
Feb 12 19:12:40.685096 env[1213]: time="2024-02-12T19:12:40.685062688Z" level=info msg="RemoveContainer for \"4be20a7bf574c884281c0cbd5cf3fea34e5f8dcfaab5cd14ee3ec84745c87944\""
Feb 12 19:12:40.688711 env[1213]: time="2024-02-12T19:12:40.688670421Z" level=info msg="RemoveContainer for \"4be20a7bf574c884281c0cbd5cf3fea34e5f8dcfaab5cd14ee3ec84745c87944\" returns successfully"
Feb 12 19:12:40.689113 kubelet[1520]: I0212 19:12:40.689022 1520 scope.go:115] "RemoveContainer" containerID="af355eeeb8b336912859033a4b27d4921d3097c8d976515576021059ac20c666"
Feb 12 19:12:40.690236 env[1213]: time="2024-02-12T19:12:40.690202561Z" level=info msg="RemoveContainer for \"af355eeeb8b336912859033a4b27d4921d3097c8d976515576021059ac20c666\""
Feb 12 19:12:40.692775 env[1213]: time="2024-02-12T19:12:40.692744152Z" level=info msg="RemoveContainer for \"af355eeeb8b336912859033a4b27d4921d3097c8d976515576021059ac20c666\" returns successfully"
Feb 12 19:12:40.693166 kubelet[1520]: I0212 19:12:40.693068 1520 scope.go:115] "RemoveContainer" containerID="d8e90bd4a5c475696482bc5e17c1089837a1ee68e18779cd35213528adc05c13"
Feb 12 19:12:40.694127 env[1213]: time="2024-02-12T19:12:40.694099142Z" level=info msg="RemoveContainer for \"d8e90bd4a5c475696482bc5e17c1089837a1ee68e18779cd35213528adc05c13\""
Feb 12 19:12:40.696511 env[1213]: time="2024-02-12T19:12:40.696475665Z" level=info msg="RemoveContainer for \"d8e90bd4a5c475696482bc5e17c1089837a1ee68e18779cd35213528adc05c13\" returns successfully"
Feb 12 19:12:40.696776 kubelet[1520]: I0212 19:12:40.696746 1520 scope.go:115] "RemoveContainer" containerID="e22d24c2ea220d9a9d6d473abf8f409ea332ab0f405126a04cc7cd23cb197084"
Feb 12 19:12:40.697122 env[1213]: time="2024-02-12T19:12:40.697053363Z" level=error msg="ContainerStatus for \"e22d24c2ea220d9a9d6d473abf8f409ea332ab0f405126a04cc7cd23cb197084\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e22d24c2ea220d9a9d6d473abf8f409ea332ab0f405126a04cc7cd23cb197084\": not found"
Feb 12 19:12:40.697375 kubelet[1520]: E0212 19:12:40.697359 1520 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e22d24c2ea220d9a9d6d473abf8f409ea332ab0f405126a04cc7cd23cb197084\": not found" containerID="e22d24c2ea220d9a9d6d473abf8f409ea332ab0f405126a04cc7cd23cb197084"
Feb 12 19:12:40.697440 kubelet[1520]: I0212 19:12:40.697396 1520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:e22d24c2ea220d9a9d6d473abf8f409ea332ab0f405126a04cc7cd23cb197084} err="failed to get container status \"e22d24c2ea220d9a9d6d473abf8f409ea332ab0f405126a04cc7cd23cb197084\": rpc error: code = NotFound desc = an error occurred when try to find container \"e22d24c2ea220d9a9d6d473abf8f409ea332ab0f405126a04cc7cd23cb197084\": not found"
Feb 12 19:12:40.697440 kubelet[1520]: I0212 19:12:40.697407 1520 scope.go:115] "RemoveContainer" containerID="cf429d8a19c2796f60a1011392d851117b320ad680bc705ed8564facfe3596f2"
Feb 12 19:12:40.697708 env[1213]: time="2024-02-12T19:12:40.697657786Z" level=error msg="ContainerStatus for \"cf429d8a19c2796f60a1011392d851117b320ad680bc705ed8564facfe3596f2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cf429d8a19c2796f60a1011392d851117b320ad680bc705ed8564facfe3596f2\": not found"
Feb 12 19:12:40.697956 kubelet[1520]: E0212 19:12:40.697849 1520 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cf429d8a19c2796f60a1011392d851117b320ad680bc705ed8564facfe3596f2\": not found" containerID="cf429d8a19c2796f60a1011392d851117b320ad680bc705ed8564facfe3596f2"
Feb 12 19:12:40.697956 kubelet[1520]: I0212 19:12:40.697881 1520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:cf429d8a19c2796f60a1011392d851117b320ad680bc705ed8564facfe3596f2} err="failed to get container status \"cf429d8a19c2796f60a1011392d851117b320ad680bc705ed8564facfe3596f2\": rpc error: code = NotFound desc = an error occurred when try to find container \"cf429d8a19c2796f60a1011392d851117b320ad680bc705ed8564facfe3596f2\": not found"
Feb 12 19:12:40.697956 kubelet[1520]: I0212 19:12:40.697892 1520 scope.go:115] "RemoveContainer" containerID="4be20a7bf574c884281c0cbd5cf3fea34e5f8dcfaab5cd14ee3ec84745c87944"
Feb 12 19:12:40.698258 env[1213]: time="2024-02-12T19:12:40.698202918Z" level=error msg="ContainerStatus for \"4be20a7bf574c884281c0cbd5cf3fea34e5f8dcfaab5cd14ee3ec84745c87944\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4be20a7bf574c884281c0cbd5cf3fea34e5f8dcfaab5cd14ee3ec84745c87944\": not found"
Feb 12 19:12:40.698507 kubelet[1520]: E0212 19:12:40.698494 1520 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4be20a7bf574c884281c0cbd5cf3fea34e5f8dcfaab5cd14ee3ec84745c87944\": not found" containerID="4be20a7bf574c884281c0cbd5cf3fea34e5f8dcfaab5cd14ee3ec84745c87944"
Feb 12 19:12:40.698579 kubelet[1520]: I0212 19:12:40.698517 1520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:4be20a7bf574c884281c0cbd5cf3fea34e5f8dcfaab5cd14ee3ec84745c87944} err="failed to get container status \"4be20a7bf574c884281c0cbd5cf3fea34e5f8dcfaab5cd14ee3ec84745c87944\": rpc error: code = NotFound desc = an error occurred when try to find container \"4be20a7bf574c884281c0cbd5cf3fea34e5f8dcfaab5cd14ee3ec84745c87944\": not found"
Feb 12 19:12:40.698579 kubelet[1520]: I0212 19:12:40.698526 1520 scope.go:115] "RemoveContainer" containerID="af355eeeb8b336912859033a4b27d4921d3097c8d976515576021059ac20c666"
Feb 12 19:12:40.698878 env[1213]: time="2024-02-12T19:12:40.698829624Z" level=error msg="ContainerStatus for \"af355eeeb8b336912859033a4b27d4921d3097c8d976515576021059ac20c666\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"af355eeeb8b336912859033a4b27d4921d3097c8d976515576021059ac20c666\": not found"
Feb 12 19:12:40.699216 kubelet[1520]: E0212 19:12:40.699094 1520 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"af355eeeb8b336912859033a4b27d4921d3097c8d976515576021059ac20c666\": not found" containerID="af355eeeb8b336912859033a4b27d4921d3097c8d976515576021059ac20c666"
Feb 12 19:12:40.699216 kubelet[1520]: I0212 19:12:40.699125 1520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:af355eeeb8b336912859033a4b27d4921d3097c8d976515576021059ac20c666} err="failed to get container status \"af355eeeb8b336912859033a4b27d4921d3097c8d976515576021059ac20c666\": rpc error: code = NotFound desc = an error occurred when try to find container \"af355eeeb8b336912859033a4b27d4921d3097c8d976515576021059ac20c666\": not found"
Feb 12 19:12:40.699216 kubelet[1520]: I0212 19:12:40.699137 1520 scope.go:115] "RemoveContainer" containerID="d8e90bd4a5c475696482bc5e17c1089837a1ee68e18779cd35213528adc05c13"
Feb 12 19:12:40.699529 env[1213]: time="2024-02-12T19:12:40.699480415Z" level=error msg="ContainerStatus for \"d8e90bd4a5c475696482bc5e17c1089837a1ee68e18779cd35213528adc05c13\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d8e90bd4a5c475696482bc5e17c1089837a1ee68e18779cd35213528adc05c13\": not found"
Feb 12 19:12:40.699744 kubelet[1520]: E0212 19:12:40.699712 1520 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d8e90bd4a5c475696482bc5e17c1089837a1ee68e18779cd35213528adc05c13\": not found" containerID="d8e90bd4a5c475696482bc5e17c1089837a1ee68e18779cd35213528adc05c13"
Feb 12 19:12:40.699744 kubelet[1520]: I0212 19:12:40.699746 1520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:d8e90bd4a5c475696482bc5e17c1089837a1ee68e18779cd35213528adc05c13} err="failed to get container status \"d8e90bd4a5c475696482bc5e17c1089837a1ee68e18779cd35213528adc05c13\": rpc error: code = NotFound desc = an error occurred when try to find container \"d8e90bd4a5c475696482bc5e17c1089837a1ee68e18779cd35213528adc05c13\": not found"
Feb 12 19:12:40.754284 kubelet[1520]: I0212 19:12:40.752101 1520 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e1847f80-f1f1-48a2-aa2a-cda00e5f14f2" (UID: "e1847f80-f1f1-48a2-aa2a-cda00e5f14f2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:12:40.754284 kubelet[1520]: I0212 19:12:40.752160 1520 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-etc-cni-netd\") pod \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\" (UID: \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\") "
Feb 12 19:12:40.754284 kubelet[1520]: I0212 19:12:40.752206 1520 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-cni-path\") pod \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\" (UID: \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\") "
Feb 12 19:12:40.754284 kubelet[1520]: I0212 19:12:40.752226 1520 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-cilium-run\") pod \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\" (UID: \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\") "
Feb 12 19:12:40.754284 kubelet[1520]: I0212 19:12:40.752248 1520 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-xtables-lock\") pod \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\" (UID: \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\") "
Feb 12 19:12:40.754284 kubelet[1520]: I0212 19:12:40.752267 1520 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-bpf-maps\") pod \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\" (UID: \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\") "
Feb 12 19:12:40.754544 kubelet[1520]: I0212 19:12:40.752291 1520 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-clustermesh-secrets\") pod \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\" (UID: \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\") "
Feb 12 19:12:40.754544 kubelet[1520]: I0212 19:12:40.752309 1520 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-hostproc\") pod \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\" (UID: \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\") "
Feb 12 19:12:40.754544 kubelet[1520]: I0212 19:12:40.752350 1520 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-cilium-cgroup\") pod \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\" (UID: \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\") "
Feb 12 19:12:40.754544 kubelet[1520]: I0212 19:12:40.752356 1520 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e1847f80-f1f1-48a2-aa2a-cda00e5f14f2" (UID: "e1847f80-f1f1-48a2-aa2a-cda00e5f14f2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:12:40.754544 kubelet[1520]: I0212 19:12:40.752376 1520 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-cilium-config-path\") pod \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\" (UID: \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\") "
Feb 12 19:12:40.754544 kubelet[1520]: I0212 19:12:40.752380 1520 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-cni-path" (OuterVolumeSpecName: "cni-path") pod "e1847f80-f1f1-48a2-aa2a-cda00e5f14f2" (UID: "e1847f80-f1f1-48a2-aa2a-cda00e5f14f2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:12:40.754687 kubelet[1520]: I0212 19:12:40.752396 1520 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-lib-modules\") pod \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\" (UID: \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\") "
Feb 12 19:12:40.754687 kubelet[1520]: I0212 19:12:40.752396 1520 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e1847f80-f1f1-48a2-aa2a-cda00e5f14f2" (UID: "e1847f80-f1f1-48a2-aa2a-cda00e5f14f2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:12:40.754687 kubelet[1520]: I0212 19:12:40.752429 1520 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e1847f80-f1f1-48a2-aa2a-cda00e5f14f2" (UID: "e1847f80-f1f1-48a2-aa2a-cda00e5f14f2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:12:40.754687 kubelet[1520]: I0212 19:12:40.752431 1520 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e1847f80-f1f1-48a2-aa2a-cda00e5f14f2" (UID: "e1847f80-f1f1-48a2-aa2a-cda00e5f14f2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:12:40.754687 kubelet[1520]: I0212 19:12:40.752455 1520 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e1847f80-f1f1-48a2-aa2a-cda00e5f14f2" (UID: "e1847f80-f1f1-48a2-aa2a-cda00e5f14f2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:12:40.754795 kubelet[1520]: W0212 19:12:40.752603 1520 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 12 19:12:40.754795 kubelet[1520]: I0212 19:12:40.752775 1520 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-hostproc" (OuterVolumeSpecName: "hostproc") pod "e1847f80-f1f1-48a2-aa2a-cda00e5f14f2" (UID: "e1847f80-f1f1-48a2-aa2a-cda00e5f14f2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:12:40.754795 kubelet[1520]: I0212 19:12:40.752831 1520 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2fsd\" (UniqueName: \"kubernetes.io/projected/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-kube-api-access-q2fsd\") pod \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\" (UID: \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\") "
Feb 12 19:12:40.754795 kubelet[1520]: I0212 19:12:40.752859 1520 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-host-proc-sys-kernel\") pod \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\" (UID: \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\") "
Feb 12 19:12:40.754795 kubelet[1520]: I0212 19:12:40.752888 1520 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-host-proc-sys-net\") pod \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\" (UID: \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\") "
Feb 12 19:12:40.754795 kubelet[1520]: I0212 19:12:40.752910 1520 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-hubble-tls\") pod \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\" (UID: \"e1847f80-f1f1-48a2-aa2a-cda00e5f14f2\") "
Feb 12 19:12:40.754929 kubelet[1520]: I0212 19:12:40.752941 1520 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-etc-cni-netd\") on node \"10.0.0.30\" DevicePath \"\""
Feb 12 19:12:40.754929 kubelet[1520]: I0212 19:12:40.752961 1520 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-cni-path\") on node \"10.0.0.30\" DevicePath \"\""
Feb 12 19:12:40.754929 kubelet[1520]: I0212 19:12:40.752970 1520 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-cilium-run\") on node \"10.0.0.30\" DevicePath \"\""
Feb 12 19:12:40.754929 kubelet[1520]: I0212 19:12:40.752981 1520 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-xtables-lock\") on node \"10.0.0.30\" DevicePath \"\""
Feb 12 19:12:40.754929 kubelet[1520]: I0212 19:12:40.753038 1520 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e1847f80-f1f1-48a2-aa2a-cda00e5f14f2" (UID: "e1847f80-f1f1-48a2-aa2a-cda00e5f14f2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:12:40.754929 kubelet[1520]: I0212 19:12:40.753078 1520 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e1847f80-f1f1-48a2-aa2a-cda00e5f14f2" (UID: "e1847f80-f1f1-48a2-aa2a-cda00e5f14f2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:12:40.755100 kubelet[1520]: I0212 19:12:40.754228 1520 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e1847f80-f1f1-48a2-aa2a-cda00e5f14f2" (UID: "e1847f80-f1f1-48a2-aa2a-cda00e5f14f2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 12 19:12:40.755395 kubelet[1520]: I0212 19:12:40.755359 1520 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e1847f80-f1f1-48a2-aa2a-cda00e5f14f2" (UID: "e1847f80-f1f1-48a2-aa2a-cda00e5f14f2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 12 19:12:40.757237 kubelet[1520]: I0212 19:12:40.757185 1520 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-kube-api-access-q2fsd" (OuterVolumeSpecName: "kube-api-access-q2fsd") pod "e1847f80-f1f1-48a2-aa2a-cda00e5f14f2" (UID: "e1847f80-f1f1-48a2-aa2a-cda00e5f14f2"). InnerVolumeSpecName "kube-api-access-q2fsd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 19:12:40.757761 kubelet[1520]: I0212 19:12:40.757735 1520 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e1847f80-f1f1-48a2-aa2a-cda00e5f14f2" (UID: "e1847f80-f1f1-48a2-aa2a-cda00e5f14f2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 19:12:40.853941 kubelet[1520]: I0212 19:12:40.853900 1520 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-host-proc-sys-kernel\") on node \"10.0.0.30\" DevicePath \"\""
Feb 12 19:12:40.853941 kubelet[1520]: I0212 19:12:40.853932 1520 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-host-proc-sys-net\") on node \"10.0.0.30\" DevicePath \"\""
Feb 12 19:12:40.853941 kubelet[1520]: I0212 19:12:40.853942 1520 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-hubble-tls\") on node \"10.0.0.30\" DevicePath \"\""
Feb 12 19:12:40.853941 kubelet[1520]: I0212 19:12:40.853951 1520 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-bpf-maps\") on node \"10.0.0.30\" DevicePath \"\""
Feb 12 19:12:40.854116 kubelet[1520]: I0212 19:12:40.853961 1520 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-clustermesh-secrets\") on node \"10.0.0.30\" DevicePath \"\""
Feb 12 19:12:40.854116 kubelet[1520]: I0212 19:12:40.853969 1520 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-hostproc\") on node \"10.0.0.30\" DevicePath \"\""
Feb 12 19:12:40.854116 kubelet[1520]: I0212 19:12:40.853985 1520 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-cilium-cgroup\") on node \"10.0.0.30\" DevicePath \"\""
Feb 12 19:12:40.854116 kubelet[1520]: I0212 19:12:40.853995 1520 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-lib-modules\") on node \"10.0.0.30\" DevicePath \"\""
Feb 12 19:12:40.854116 kubelet[1520]: I0212 19:12:40.854005 1520 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-q2fsd\" (UniqueName: \"kubernetes.io/projected/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-kube-api-access-q2fsd\") on node \"10.0.0.30\" DevicePath \"\""
Feb 12 19:12:40.854116 kubelet[1520]: I0212 19:12:40.854014 1520 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2-cilium-config-path\") on node \"10.0.0.30\" DevicePath \"\""
Feb 12 19:12:41.414580 kubelet[1520]: E0212 19:12:41.414503 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:12:41.424435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68ecd8afdc4e45548da96c50eb5b64f9a8fd0198c037d11e620161b4b89ba1ee-rootfs.mount: Deactivated successfully.
Feb 12 19:12:41.424586 systemd[1]: var-lib-kubelet-pods-e1847f80\x2df1f1\x2d48a2\x2daa2a\x2dcda00e5f14f2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq2fsd.mount: Deactivated successfully.
Feb 12 19:12:41.424586 systemd[1]: var-lib-kubelet-pods-e1847f80\x2df1f1\x2d48a2\x2daa2a\x2dcda00e5f14f2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 12 19:12:41.424750 systemd[1]: var-lib-kubelet-pods-e1847f80\x2df1f1\x2d48a2\x2daa2a\x2dcda00e5f14f2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 12 19:12:42.415829 kubelet[1520]: E0212 19:12:42.414702 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:42.547847 kubelet[1520]: I0212 19:12:42.547793 1520 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=e1847f80-f1f1-48a2-aa2a-cda00e5f14f2 path="/var/lib/kubelet/pods/e1847f80-f1f1-48a2-aa2a-cda00e5f14f2/volumes" Feb 12 19:12:43.308926 kubelet[1520]: I0212 19:12:43.308216 1520 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:12:43.308926 kubelet[1520]: E0212 19:12:43.308270 1520 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e1847f80-f1f1-48a2-aa2a-cda00e5f14f2" containerName="apply-sysctl-overwrites" Feb 12 19:12:43.308926 kubelet[1520]: E0212 19:12:43.308280 1520 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e1847f80-f1f1-48a2-aa2a-cda00e5f14f2" containerName="mount-bpf-fs" Feb 12 19:12:43.308926 kubelet[1520]: E0212 19:12:43.308288 1520 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e1847f80-f1f1-48a2-aa2a-cda00e5f14f2" containerName="clean-cilium-state" Feb 12 19:12:43.308926 kubelet[1520]: E0212 19:12:43.308295 1520 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e1847f80-f1f1-48a2-aa2a-cda00e5f14f2" containerName="cilium-agent" Feb 12 19:12:43.308926 kubelet[1520]: E0212 19:12:43.308302 1520 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e1847f80-f1f1-48a2-aa2a-cda00e5f14f2" containerName="mount-cgroup" Feb 12 19:12:43.308926 kubelet[1520]: I0212 19:12:43.308347 1520 memory_manager.go:346] "RemoveStaleState removing state" podUID="e1847f80-f1f1-48a2-aa2a-cda00e5f14f2" containerName="cilium-agent" Feb 12 19:12:43.312360 kubelet[1520]: I0212 19:12:43.312295 1520 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:12:43.415648 kubelet[1520]: E0212 19:12:43.415569 1520 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:43.473687 kubelet[1520]: I0212 19:12:43.471273 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-lib-modules\") pod \"cilium-mbm8c\" (UID: \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\") " pod="kube-system/cilium-mbm8c" Feb 12 19:12:43.473687 kubelet[1520]: I0212 19:12:43.471341 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-host-proc-sys-net\") pod \"cilium-mbm8c\" (UID: \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\") " pod="kube-system/cilium-mbm8c" Feb 12 19:12:43.473687 kubelet[1520]: I0212 19:12:43.471416 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9926138a-6a5a-4fba-ac16-0efb87a2bb00-hubble-tls\") pod \"cilium-mbm8c\" (UID: \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\") " pod="kube-system/cilium-mbm8c" Feb 12 19:12:43.473687 kubelet[1520]: I0212 19:12:43.471467 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/51a88a46-966a-4b12-99cf-bffc3e15bfb7-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-dbzq2\" (UID: \"51a88a46-966a-4b12-99cf-bffc3e15bfb7\") " pod="kube-system/cilium-operator-f59cbd8c6-dbzq2" Feb 12 19:12:43.473687 kubelet[1520]: I0212 19:12:43.471517 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-cni-path\") pod \"cilium-mbm8c\" (UID: \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\") " pod="kube-system/cilium-mbm8c" Feb 12 19:12:43.474168 kubelet[1520]: I0212 
19:12:43.471546 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-etc-cni-netd\") pod \"cilium-mbm8c\" (UID: \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\") " pod="kube-system/cilium-mbm8c" Feb 12 19:12:43.474168 kubelet[1520]: I0212 19:12:43.471570 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-xtables-lock\") pod \"cilium-mbm8c\" (UID: \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\") " pod="kube-system/cilium-mbm8c" Feb 12 19:12:43.474168 kubelet[1520]: I0212 19:12:43.471592 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9926138a-6a5a-4fba-ac16-0efb87a2bb00-clustermesh-secrets\") pod \"cilium-mbm8c\" (UID: \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\") " pod="kube-system/cilium-mbm8c" Feb 12 19:12:43.474168 kubelet[1520]: I0212 19:12:43.471620 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9926138a-6a5a-4fba-ac16-0efb87a2bb00-cilium-ipsec-secrets\") pod \"cilium-mbm8c\" (UID: \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\") " pod="kube-system/cilium-mbm8c" Feb 12 19:12:43.474168 kubelet[1520]: I0212 19:12:43.471651 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p8cz\" (UniqueName: \"kubernetes.io/projected/9926138a-6a5a-4fba-ac16-0efb87a2bb00-kube-api-access-7p8cz\") pod \"cilium-mbm8c\" (UID: \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\") " pod="kube-system/cilium-mbm8c" Feb 12 19:12:43.474168 kubelet[1520]: I0212 19:12:43.471673 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-cilium-run\") pod \"cilium-mbm8c\" (UID: \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\") " pod="kube-system/cilium-mbm8c" Feb 12 19:12:43.474357 kubelet[1520]: I0212 19:12:43.471693 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-bpf-maps\") pod \"cilium-mbm8c\" (UID: \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\") " pod="kube-system/cilium-mbm8c" Feb 12 19:12:43.474357 kubelet[1520]: I0212 19:12:43.471717 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-cilium-cgroup\") pod \"cilium-mbm8c\" (UID: \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\") " pod="kube-system/cilium-mbm8c" Feb 12 19:12:43.474357 kubelet[1520]: I0212 19:12:43.471742 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c9sb\" (UniqueName: \"kubernetes.io/projected/51a88a46-966a-4b12-99cf-bffc3e15bfb7-kube-api-access-2c9sb\") pod \"cilium-operator-f59cbd8c6-dbzq2\" (UID: \"51a88a46-966a-4b12-99cf-bffc3e15bfb7\") " pod="kube-system/cilium-operator-f59cbd8c6-dbzq2" Feb 12 19:12:43.474357 kubelet[1520]: I0212 19:12:43.471762 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9926138a-6a5a-4fba-ac16-0efb87a2bb00-cilium-config-path\") pod \"cilium-mbm8c\" (UID: \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\") " pod="kube-system/cilium-mbm8c" Feb 12 19:12:43.474357 kubelet[1520]: I0212 19:12:43.471783 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-host-proc-sys-kernel\") pod \"cilium-mbm8c\" (UID: \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\") " pod="kube-system/cilium-mbm8c" Feb 12 19:12:43.474486 kubelet[1520]: I0212 19:12:43.471803 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-hostproc\") pod \"cilium-mbm8c\" (UID: \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\") " pod="kube-system/cilium-mbm8c" Feb 12 19:12:43.611570 kubelet[1520]: E0212 19:12:43.611462 1520 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:12:43.612095 env[1213]: time="2024-02-12T19:12:43.611995866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mbm8c,Uid:9926138a-6a5a-4fba-ac16-0efb87a2bb00,Namespace:kube-system,Attempt:0,}" Feb 12 19:12:43.616407 kubelet[1520]: E0212 19:12:43.616380 1520 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:12:43.618960 env[1213]: time="2024-02-12T19:12:43.618907781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-dbzq2,Uid:51a88a46-966a-4b12-99cf-bffc3e15bfb7,Namespace:kube-system,Attempt:0,}" Feb 12 19:12:43.633923 env[1213]: time="2024-02-12T19:12:43.633727008Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:12:43.633923 env[1213]: time="2024-02-12T19:12:43.633768814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:12:43.633923 env[1213]: time="2024-02-12T19:12:43.633779576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:12:43.634114 env[1213]: time="2024-02-12T19:12:43.633998250Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/61d17b12f89def6e414ad54c2cfa93e67eb8ebb9d8db70c9db08a3be3449880b pid=3173 runtime=io.containerd.runc.v2 Feb 12 19:12:43.649265 env[1213]: time="2024-02-12T19:12:43.649160129Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:12:43.649265 env[1213]: time="2024-02-12T19:12:43.649204016Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:12:43.649265 env[1213]: time="2024-02-12T19:12:43.649215018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:12:43.649504 env[1213]: time="2024-02-12T19:12:43.649424490Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bee8f6e7fda2152186f1cdb0fc4e4c6c1408fb4cbb2c1a14e2670a911462225e pid=3194 runtime=io.containerd.runc.v2 Feb 12 19:12:43.724464 env[1213]: time="2024-02-12T19:12:43.724414879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mbm8c,Uid:9926138a-6a5a-4fba-ac16-0efb87a2bb00,Namespace:kube-system,Attempt:0,} returns sandbox id \"61d17b12f89def6e414ad54c2cfa93e67eb8ebb9d8db70c9db08a3be3449880b\"" Feb 12 19:12:43.725776 kubelet[1520]: E0212 19:12:43.725254 1520 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:12:43.727876 env[1213]: time="2024-02-12T19:12:43.727833331Z" level=info msg="CreateContainer within sandbox \"61d17b12f89def6e414ad54c2cfa93e67eb8ebb9d8db70c9db08a3be3449880b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:12:43.732987 env[1213]: time="2024-02-12T19:12:43.732946127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-dbzq2,Uid:51a88a46-966a-4b12-99cf-bffc3e15bfb7,Namespace:kube-system,Attempt:0,} returns sandbox id \"bee8f6e7fda2152186f1cdb0fc4e4c6c1408fb4cbb2c1a14e2670a911462225e\"" Feb 12 19:12:43.733589 kubelet[1520]: E0212 19:12:43.733569 1520 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:12:43.734817 env[1213]: time="2024-02-12T19:12:43.734781773Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 12 19:12:43.745947 env[1213]: time="2024-02-12T19:12:43.745866858Z" 
level=info msg="CreateContainer within sandbox \"61d17b12f89def6e414ad54c2cfa93e67eb8ebb9d8db70c9db08a3be3449880b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"47e215b90a415071e7b0390b689dd94a3316aec4c1aab18394f60f482a144ee1\"" Feb 12 19:12:43.746514 env[1213]: time="2024-02-12T19:12:43.746485794Z" level=info msg="StartContainer for \"47e215b90a415071e7b0390b689dd94a3316aec4c1aab18394f60f482a144ee1\"" Feb 12 19:12:43.839932 env[1213]: time="2024-02-12T19:12:43.839866925Z" level=info msg="StartContainer for \"47e215b90a415071e7b0390b689dd94a3316aec4c1aab18394f60f482a144ee1\" returns successfully" Feb 12 19:12:43.887931 env[1213]: time="2024-02-12T19:12:43.887789542Z" level=info msg="shim disconnected" id=47e215b90a415071e7b0390b689dd94a3316aec4c1aab18394f60f482a144ee1 Feb 12 19:12:43.887931 env[1213]: time="2024-02-12T19:12:43.887838630Z" level=warning msg="cleaning up after shim disconnected" id=47e215b90a415071e7b0390b689dd94a3316aec4c1aab18394f60f482a144ee1 namespace=k8s.io Feb 12 19:12:43.887931 env[1213]: time="2024-02-12T19:12:43.887849832Z" level=info msg="cleaning up dead shim" Feb 12 19:12:43.895431 env[1213]: time="2024-02-12T19:12:43.895364001Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:12:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3298 runtime=io.containerd.runc.v2\n" Feb 12 19:12:44.416718 kubelet[1520]: E0212 19:12:44.416659 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:44.692082 env[1213]: time="2024-02-12T19:12:44.691974014Z" level=info msg="StopPodSandbox for \"61d17b12f89def6e414ad54c2cfa93e67eb8ebb9d8db70c9db08a3be3449880b\"" Feb 12 19:12:44.692522 env[1213]: time="2024-02-12T19:12:44.692479650Z" level=info msg="Container to stop \"47e215b90a415071e7b0390b689dd94a3316aec4c1aab18394f60f482a144ee1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:12:44.694262 
systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-61d17b12f89def6e414ad54c2cfa93e67eb8ebb9d8db70c9db08a3be3449880b-shm.mount: Deactivated successfully. Feb 12 19:12:44.718650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3009192570.mount: Deactivated successfully. Feb 12 19:12:44.718791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61d17b12f89def6e414ad54c2cfa93e67eb8ebb9d8db70c9db08a3be3449880b-rootfs.mount: Deactivated successfully. Feb 12 19:12:44.732985 env[1213]: time="2024-02-12T19:12:44.732937019Z" level=info msg="shim disconnected" id=61d17b12f89def6e414ad54c2cfa93e67eb8ebb9d8db70c9db08a3be3449880b Feb 12 19:12:44.732985 env[1213]: time="2024-02-12T19:12:44.732983226Z" level=warning msg="cleaning up after shim disconnected" id=61d17b12f89def6e414ad54c2cfa93e67eb8ebb9d8db70c9db08a3be3449880b namespace=k8s.io Feb 12 19:12:44.732985 env[1213]: time="2024-02-12T19:12:44.732992148Z" level=info msg="cleaning up dead shim" Feb 12 19:12:44.742466 env[1213]: time="2024-02-12T19:12:44.742418096Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:12:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3331 runtime=io.containerd.runc.v2\n" Feb 12 19:12:44.742958 env[1213]: time="2024-02-12T19:12:44.742925052Z" level=info msg="TearDown network for sandbox \"61d17b12f89def6e414ad54c2cfa93e67eb8ebb9d8db70c9db08a3be3449880b\" successfully" Feb 12 19:12:44.743089 env[1213]: time="2024-02-12T19:12:44.743069474Z" level=info msg="StopPodSandbox for \"61d17b12f89def6e414ad54c2cfa93e67eb8ebb9d8db70c9db08a3be3449880b\" returns successfully" Feb 12 19:12:44.879245 kubelet[1520]: I0212 19:12:44.879193 1520 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-xtables-lock\") pod \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\" (UID: \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\") " Feb 12 19:12:44.879245 
kubelet[1520]: I0212 19:12:44.879246 1520 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9926138a-6a5a-4fba-ac16-0efb87a2bb00-clustermesh-secrets\") pod \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\" (UID: \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\") " Feb 12 19:12:44.879670 kubelet[1520]: I0212 19:12:44.879266 1520 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-host-proc-sys-net\") pod \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\" (UID: \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\") " Feb 12 19:12:44.879670 kubelet[1520]: I0212 19:12:44.879288 1520 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9926138a-6a5a-4fba-ac16-0efb87a2bb00-hubble-tls\") pod \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\" (UID: \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\") " Feb 12 19:12:44.879670 kubelet[1520]: I0212 19:12:44.879313 1520 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p8cz\" (UniqueName: \"kubernetes.io/projected/9926138a-6a5a-4fba-ac16-0efb87a2bb00-kube-api-access-7p8cz\") pod \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\" (UID: \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\") " Feb 12 19:12:44.879670 kubelet[1520]: I0212 19:12:44.879348 1520 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-bpf-maps\") pod \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\" (UID: \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\") " Feb 12 19:12:44.879670 kubelet[1520]: I0212 19:12:44.879365 1520 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-lib-modules\") pod 
\"9926138a-6a5a-4fba-ac16-0efb87a2bb00\" (UID: \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\") " Feb 12 19:12:44.879670 kubelet[1520]: I0212 19:12:44.879386 1520 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-hostproc\") pod \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\" (UID: \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\") " Feb 12 19:12:44.879819 kubelet[1520]: I0212 19:12:44.879403 1520 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-cilium-cgroup\") pod \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\" (UID: \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\") " Feb 12 19:12:44.879819 kubelet[1520]: I0212 19:12:44.879423 1520 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9926138a-6a5a-4fba-ac16-0efb87a2bb00-cilium-config-path\") pod \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\" (UID: \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\") " Feb 12 19:12:44.879819 kubelet[1520]: I0212 19:12:44.879450 1520 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-etc-cni-netd\") pod \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\" (UID: \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\") " Feb 12 19:12:44.879819 kubelet[1520]: I0212 19:12:44.879468 1520 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-cilium-run\") pod \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\" (UID: \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\") " Feb 12 19:12:44.879819 kubelet[1520]: I0212 19:12:44.879485 1520 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-cni-path\") pod \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\" (UID: \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\") " Feb 12 19:12:44.879819 kubelet[1520]: I0212 19:12:44.879508 1520 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9926138a-6a5a-4fba-ac16-0efb87a2bb00-cilium-ipsec-secrets\") pod \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\" (UID: \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\") " Feb 12 19:12:44.879995 kubelet[1520]: I0212 19:12:44.879528 1520 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-host-proc-sys-kernel\") pod \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\" (UID: \"9926138a-6a5a-4fba-ac16-0efb87a2bb00\") " Feb 12 19:12:44.879995 kubelet[1520]: I0212 19:12:44.879591 1520 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9926138a-6a5a-4fba-ac16-0efb87a2bb00" (UID: "9926138a-6a5a-4fba-ac16-0efb87a2bb00"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:12:44.879995 kubelet[1520]: I0212 19:12:44.879621 1520 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9926138a-6a5a-4fba-ac16-0efb87a2bb00" (UID: "9926138a-6a5a-4fba-ac16-0efb87a2bb00"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:12:44.879995 kubelet[1520]: I0212 19:12:44.879955 1520 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-hostproc" (OuterVolumeSpecName: "hostproc") pod "9926138a-6a5a-4fba-ac16-0efb87a2bb00" (UID: "9926138a-6a5a-4fba-ac16-0efb87a2bb00"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:12:44.879995 kubelet[1520]: I0212 19:12:44.879974 1520 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9926138a-6a5a-4fba-ac16-0efb87a2bb00" (UID: "9926138a-6a5a-4fba-ac16-0efb87a2bb00"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:12:44.880112 kubelet[1520]: I0212 19:12:44.879995 1520 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9926138a-6a5a-4fba-ac16-0efb87a2bb00" (UID: "9926138a-6a5a-4fba-ac16-0efb87a2bb00"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:12:44.880112 kubelet[1520]: I0212 19:12:44.880003 1520 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9926138a-6a5a-4fba-ac16-0efb87a2bb00" (UID: "9926138a-6a5a-4fba-ac16-0efb87a2bb00"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:12:44.880163 kubelet[1520]: W0212 19:12:44.880130 1520 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/9926138a-6a5a-4fba-ac16-0efb87a2bb00/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 19:12:44.880501 kubelet[1520]: I0212 19:12:44.880214 1520 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9926138a-6a5a-4fba-ac16-0efb87a2bb00" (UID: "9926138a-6a5a-4fba-ac16-0efb87a2bb00"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:12:44.880501 kubelet[1520]: I0212 19:12:44.880231 1520 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9926138a-6a5a-4fba-ac16-0efb87a2bb00" (UID: "9926138a-6a5a-4fba-ac16-0efb87a2bb00"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:12:44.880501 kubelet[1520]: I0212 19:12:44.880253 1520 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-cni-path" (OuterVolumeSpecName: "cni-path") pod "9926138a-6a5a-4fba-ac16-0efb87a2bb00" (UID: "9926138a-6a5a-4fba-ac16-0efb87a2bb00"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:12:44.880501 kubelet[1520]: I0212 19:12:44.880272 1520 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9926138a-6a5a-4fba-ac16-0efb87a2bb00" (UID: "9926138a-6a5a-4fba-ac16-0efb87a2bb00"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:12:44.882383 kubelet[1520]: I0212 19:12:44.881950 1520 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9926138a-6a5a-4fba-ac16-0efb87a2bb00-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9926138a-6a5a-4fba-ac16-0efb87a2bb00" (UID: "9926138a-6a5a-4fba-ac16-0efb87a2bb00"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:12:44.882957 kubelet[1520]: I0212 19:12:44.882824 1520 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9926138a-6a5a-4fba-ac16-0efb87a2bb00-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9926138a-6a5a-4fba-ac16-0efb87a2bb00" (UID: "9926138a-6a5a-4fba-ac16-0efb87a2bb00"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:12:44.883219 kubelet[1520]: I0212 19:12:44.883189 1520 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9926138a-6a5a-4fba-ac16-0efb87a2bb00-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9926138a-6a5a-4fba-ac16-0efb87a2bb00" (UID: "9926138a-6a5a-4fba-ac16-0efb87a2bb00"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:12:44.884828 kubelet[1520]: I0212 19:12:44.884789 1520 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9926138a-6a5a-4fba-ac16-0efb87a2bb00-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "9926138a-6a5a-4fba-ac16-0efb87a2bb00" (UID: "9926138a-6a5a-4fba-ac16-0efb87a2bb00"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:12:44.885104 kubelet[1520]: I0212 19:12:44.885069 1520 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9926138a-6a5a-4fba-ac16-0efb87a2bb00-kube-api-access-7p8cz" (OuterVolumeSpecName: "kube-api-access-7p8cz") pod "9926138a-6a5a-4fba-ac16-0efb87a2bb00" (UID: "9926138a-6a5a-4fba-ac16-0efb87a2bb00"). InnerVolumeSpecName "kube-api-access-7p8cz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:12:44.981942 kubelet[1520]: I0212 19:12:44.980370 1520 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-host-proc-sys-net\") on node \"10.0.0.30\" DevicePath \"\"" Feb 12 19:12:44.981942 kubelet[1520]: I0212 19:12:44.980404 1520 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9926138a-6a5a-4fba-ac16-0efb87a2bb00-hubble-tls\") on node \"10.0.0.30\" DevicePath \"\"" Feb 12 19:12:44.981942 kubelet[1520]: I0212 19:12:44.980414 1520 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-xtables-lock\") on node \"10.0.0.30\" DevicePath \"\"" Feb 12 19:12:44.981942 kubelet[1520]: I0212 19:12:44.980424 1520 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9926138a-6a5a-4fba-ac16-0efb87a2bb00-clustermesh-secrets\") on node \"10.0.0.30\" DevicePath \"\"" Feb 12 19:12:44.981942 kubelet[1520]: I0212 19:12:44.980436 1520 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-bpf-maps\") on node \"10.0.0.30\" DevicePath \"\"" Feb 12 19:12:44.981942 kubelet[1520]: I0212 19:12:44.980445 1520 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-lib-modules\") on node \"10.0.0.30\" DevicePath \"\"" Feb 12 19:12:44.981942 kubelet[1520]: I0212 19:12:44.980456 1520 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-hostproc\") on node \"10.0.0.30\" DevicePath \"\"" Feb 12 19:12:44.981942 kubelet[1520]: I0212 19:12:44.980468 1520 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-7p8cz\" (UniqueName: \"kubernetes.io/projected/9926138a-6a5a-4fba-ac16-0efb87a2bb00-kube-api-access-7p8cz\") on node \"10.0.0.30\" DevicePath \"\"" Feb 12 19:12:44.982221 kubelet[1520]: I0212 19:12:44.980477 1520 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-etc-cni-netd\") on node \"10.0.0.30\" DevicePath \"\"" Feb 12 19:12:44.982221 kubelet[1520]: I0212 19:12:44.980486 1520 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-cilium-run\") on node \"10.0.0.30\" DevicePath \"\"" Feb 12 19:12:44.982221 kubelet[1520]: I0212 19:12:44.980494 1520 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-cilium-cgroup\") on node \"10.0.0.30\" DevicePath \"\"" Feb 12 19:12:44.982221 kubelet[1520]: I0212 19:12:44.980504 1520 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9926138a-6a5a-4fba-ac16-0efb87a2bb00-cilium-config-path\") on node \"10.0.0.30\" DevicePath \"\"" Feb 12 19:12:44.982221 kubelet[1520]: I0212 19:12:44.980514 1520 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-cni-path\") on node \"10.0.0.30\" 
DevicePath \"\"" Feb 12 19:12:44.982221 kubelet[1520]: I0212 19:12:44.980523 1520 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9926138a-6a5a-4fba-ac16-0efb87a2bb00-cilium-ipsec-secrets\") on node \"10.0.0.30\" DevicePath \"\"" Feb 12 19:12:44.982221 kubelet[1520]: I0212 19:12:44.980536 1520 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9926138a-6a5a-4fba-ac16-0efb87a2bb00-host-proc-sys-kernel\") on node \"10.0.0.30\" DevicePath \"\"" Feb 12 19:12:45.292265 env[1213]: time="2024-02-12T19:12:45.291537518Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:12:45.296227 env[1213]: time="2024-02-12T19:12:45.296169042Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:12:45.304152 env[1213]: time="2024-02-12T19:12:45.304109054Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:12:45.304631 env[1213]: time="2024-02-12T19:12:45.304599487Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 12 19:12:45.306438 env[1213]: time="2024-02-12T19:12:45.306403113Z" level=info msg="CreateContainer within sandbox \"bee8f6e7fda2152186f1cdb0fc4e4c6c1408fb4cbb2c1a14e2670a911462225e\" for container 
&ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 12 19:12:45.319289 env[1213]: time="2024-02-12T19:12:45.319233167Z" level=info msg="CreateContainer within sandbox \"bee8f6e7fda2152186f1cdb0fc4e4c6c1408fb4cbb2c1a14e2670a911462225e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"760d1d47eb2d463ac5778acfe93c83b09bd3fbc53885225ea84ae11b88cc0ae2\"" Feb 12 19:12:45.320108 env[1213]: time="2024-02-12T19:12:45.320077852Z" level=info msg="StartContainer for \"760d1d47eb2d463ac5778acfe93c83b09bd3fbc53885225ea84ae11b88cc0ae2\"" Feb 12 19:12:45.416847 kubelet[1520]: E0212 19:12:45.416793 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:45.444529 kubelet[1520]: E0212 19:12:45.444442 1520 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 19:12:45.449904 env[1213]: time="2024-02-12T19:12:45.449840009Z" level=info msg="StartContainer for \"760d1d47eb2d463ac5778acfe93c83b09bd3fbc53885225ea84ae11b88cc0ae2\" returns successfully" Feb 12 19:12:45.578262 systemd[1]: var-lib-kubelet-pods-9926138a\x2d6a5a\x2d4fba\x2dac16\x2d0efb87a2bb00-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7p8cz.mount: Deactivated successfully. Feb 12 19:12:45.578426 systemd[1]: var-lib-kubelet-pods-9926138a\x2d6a5a\x2d4fba\x2dac16\x2d0efb87a2bb00-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 12 19:12:45.578513 systemd[1]: var-lib-kubelet-pods-9926138a\x2d6a5a\x2d4fba\x2dac16\x2d0efb87a2bb00-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 19:12:45.578600 systemd[1]: var-lib-kubelet-pods-9926138a\x2d6a5a\x2d4fba\x2dac16\x2d0efb87a2bb00-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 12 19:12:45.694555 kubelet[1520]: E0212 19:12:45.694517 1520 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:12:45.696020 kubelet[1520]: I0212 19:12:45.695918 1520 scope.go:115] "RemoveContainer" containerID="47e215b90a415071e7b0390b689dd94a3316aec4c1aab18394f60f482a144ee1" Feb 12 19:12:45.697216 env[1213]: time="2024-02-12T19:12:45.697181964Z" level=info msg="RemoveContainer for \"47e215b90a415071e7b0390b689dd94a3316aec4c1aab18394f60f482a144ee1\"" Feb 12 19:12:45.706606 env[1213]: time="2024-02-12T19:12:45.706544907Z" level=info msg="RemoveContainer for \"47e215b90a415071e7b0390b689dd94a3316aec4c1aab18394f60f482a144ee1\" returns successfully" Feb 12 19:12:45.707419 kubelet[1520]: I0212 19:12:45.707387 1520 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-dbzq2" podStartSLOduration=-9.223372034147425e+09 pod.CreationTimestamp="2024-02-12 19:12:43 +0000 UTC" firstStartedPulling="2024-02-12 19:12:43.734219805 +0000 UTC m=+64.218676201" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:12:45.707087627 +0000 UTC m=+66.191544063" watchObservedRunningTime="2024-02-12 19:12:45.707350385 +0000 UTC m=+66.191806821" Feb 12 19:12:45.753379 kubelet[1520]: I0212 19:12:45.753295 1520 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:12:45.753532 kubelet[1520]: E0212 19:12:45.753393 1520 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9926138a-6a5a-4fba-ac16-0efb87a2bb00" containerName="mount-cgroup" Feb 12 19:12:45.753532 kubelet[1520]: I0212 19:12:45.753417 1520 memory_manager.go:346] "RemoveStaleState removing state" podUID="9926138a-6a5a-4fba-ac16-0efb87a2bb00" containerName="mount-cgroup" Feb 12 19:12:45.883724 kubelet[1520]: I0212 19:12:45.883595 1520 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/100b4a15-939d-4099-875f-fe3beb96de7c-bpf-maps\") pod \"cilium-t5lfq\" (UID: \"100b4a15-939d-4099-875f-fe3beb96de7c\") " pod="kube-system/cilium-t5lfq" Feb 12 19:12:45.883724 kubelet[1520]: I0212 19:12:45.883642 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/100b4a15-939d-4099-875f-fe3beb96de7c-etc-cni-netd\") pod \"cilium-t5lfq\" (UID: \"100b4a15-939d-4099-875f-fe3beb96de7c\") " pod="kube-system/cilium-t5lfq" Feb 12 19:12:45.883724 kubelet[1520]: I0212 19:12:45.883667 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/100b4a15-939d-4099-875f-fe3beb96de7c-xtables-lock\") pod \"cilium-t5lfq\" (UID: \"100b4a15-939d-4099-875f-fe3beb96de7c\") " pod="kube-system/cilium-t5lfq" Feb 12 19:12:45.883724 kubelet[1520]: I0212 19:12:45.883691 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/100b4a15-939d-4099-875f-fe3beb96de7c-host-proc-sys-net\") pod \"cilium-t5lfq\" (UID: \"100b4a15-939d-4099-875f-fe3beb96de7c\") " pod="kube-system/cilium-t5lfq" Feb 12 19:12:45.884260 kubelet[1520]: I0212 19:12:45.884220 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/100b4a15-939d-4099-875f-fe3beb96de7c-hostproc\") pod \"cilium-t5lfq\" (UID: \"100b4a15-939d-4099-875f-fe3beb96de7c\") " pod="kube-system/cilium-t5lfq" Feb 12 19:12:45.884304 kubelet[1520]: I0212 19:12:45.884286 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/100b4a15-939d-4099-875f-fe3beb96de7c-clustermesh-secrets\") pod \"cilium-t5lfq\" (UID: \"100b4a15-939d-4099-875f-fe3beb96de7c\") " pod="kube-system/cilium-t5lfq" Feb 12 19:12:45.884369 kubelet[1520]: I0212 19:12:45.884359 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/100b4a15-939d-4099-875f-fe3beb96de7c-host-proc-sys-kernel\") pod \"cilium-t5lfq\" (UID: \"100b4a15-939d-4099-875f-fe3beb96de7c\") " pod="kube-system/cilium-t5lfq" Feb 12 19:12:45.884424 kubelet[1520]: I0212 19:12:45.884412 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm8db\" (UniqueName: \"kubernetes.io/projected/100b4a15-939d-4099-875f-fe3beb96de7c-kube-api-access-bm8db\") pod \"cilium-t5lfq\" (UID: \"100b4a15-939d-4099-875f-fe3beb96de7c\") " pod="kube-system/cilium-t5lfq" Feb 12 19:12:45.884457 kubelet[1520]: I0212 19:12:45.884450 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/100b4a15-939d-4099-875f-fe3beb96de7c-cilium-cgroup\") pod \"cilium-t5lfq\" (UID: \"100b4a15-939d-4099-875f-fe3beb96de7c\") " pod="kube-system/cilium-t5lfq" Feb 12 19:12:45.884494 kubelet[1520]: I0212 19:12:45.884484 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/100b4a15-939d-4099-875f-fe3beb96de7c-cni-path\") pod \"cilium-t5lfq\" (UID: \"100b4a15-939d-4099-875f-fe3beb96de7c\") " pod="kube-system/cilium-t5lfq" Feb 12 19:12:45.884782 kubelet[1520]: I0212 19:12:45.884748 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/100b4a15-939d-4099-875f-fe3beb96de7c-cilium-ipsec-secrets\") pod \"cilium-t5lfq\" 
(UID: \"100b4a15-939d-4099-875f-fe3beb96de7c\") " pod="kube-system/cilium-t5lfq" Feb 12 19:12:45.884824 kubelet[1520]: I0212 19:12:45.884795 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/100b4a15-939d-4099-875f-fe3beb96de7c-hubble-tls\") pod \"cilium-t5lfq\" (UID: \"100b4a15-939d-4099-875f-fe3beb96de7c\") " pod="kube-system/cilium-t5lfq" Feb 12 19:12:45.884898 kubelet[1520]: I0212 19:12:45.884832 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/100b4a15-939d-4099-875f-fe3beb96de7c-cilium-run\") pod \"cilium-t5lfq\" (UID: \"100b4a15-939d-4099-875f-fe3beb96de7c\") " pod="kube-system/cilium-t5lfq" Feb 12 19:12:45.884898 kubelet[1520]: I0212 19:12:45.884857 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/100b4a15-939d-4099-875f-fe3beb96de7c-lib-modules\") pod \"cilium-t5lfq\" (UID: \"100b4a15-939d-4099-875f-fe3beb96de7c\") " pod="kube-system/cilium-t5lfq" Feb 12 19:12:45.884898 kubelet[1520]: I0212 19:12:45.884879 1520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/100b4a15-939d-4099-875f-fe3beb96de7c-cilium-config-path\") pod \"cilium-t5lfq\" (UID: \"100b4a15-939d-4099-875f-fe3beb96de7c\") " pod="kube-system/cilium-t5lfq" Feb 12 19:12:46.056780 kubelet[1520]: E0212 19:12:46.056739 1520 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:12:46.057272 env[1213]: time="2024-02-12T19:12:46.057233637Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-t5lfq,Uid:100b4a15-939d-4099-875f-fe3beb96de7c,Namespace:kube-system,Attempt:0,}" Feb 12 19:12:46.069826 env[1213]: time="2024-02-12T19:12:46.069746439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:12:46.069963 env[1213]: time="2024-02-12T19:12:46.069800007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:12:46.069963 env[1213]: time="2024-02-12T19:12:46.069810768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:12:46.070043 env[1213]: time="2024-02-12T19:12:46.069963070Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/18971f3f7f5ed1026172a24f524e04cab8ce0cb2822f89c5cc564a6e05d9e4cf pid=3398 runtime=io.containerd.runc.v2 Feb 12 19:12:46.116935 env[1213]: time="2024-02-12T19:12:46.116874266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t5lfq,Uid:100b4a15-939d-4099-875f-fe3beb96de7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"18971f3f7f5ed1026172a24f524e04cab8ce0cb2822f89c5cc564a6e05d9e4cf\"" Feb 12 19:12:46.117914 kubelet[1520]: E0212 19:12:46.117727 1520 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:12:46.119373 env[1213]: time="2024-02-12T19:12:46.119341301Z" level=info msg="CreateContainer within sandbox \"18971f3f7f5ed1026172a24f524e04cab8ce0cb2822f89c5cc564a6e05d9e4cf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:12:46.132420 env[1213]: time="2024-02-12T19:12:46.132362496Z" level=info msg="CreateContainer within sandbox \"18971f3f7f5ed1026172a24f524e04cab8ce0cb2822f89c5cc564a6e05d9e4cf\" for 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"95e20a68587819a8c6db4e9addfd93eec83fae10483424286fe50fc0c024aeb9\"" Feb 12 19:12:46.132844 env[1213]: time="2024-02-12T19:12:46.132813761Z" level=info msg="StartContainer for \"95e20a68587819a8c6db4e9addfd93eec83fae10483424286fe50fc0c024aeb9\"" Feb 12 19:12:46.191970 env[1213]: time="2024-02-12T19:12:46.188534626Z" level=info msg="StartContainer for \"95e20a68587819a8c6db4e9addfd93eec83fae10483424286fe50fc0c024aeb9\" returns successfully" Feb 12 19:12:46.217790 env[1213]: time="2024-02-12T19:12:46.217742672Z" level=info msg="shim disconnected" id=95e20a68587819a8c6db4e9addfd93eec83fae10483424286fe50fc0c024aeb9 Feb 12 19:12:46.217790 env[1213]: time="2024-02-12T19:12:46.217790399Z" level=warning msg="cleaning up after shim disconnected" id=95e20a68587819a8c6db4e9addfd93eec83fae10483424286fe50fc0c024aeb9 namespace=k8s.io Feb 12 19:12:46.217790 env[1213]: time="2024-02-12T19:12:46.217800080Z" level=info msg="cleaning up dead shim" Feb 12 19:12:46.227686 env[1213]: time="2024-02-12T19:12:46.227633736Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:12:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3482 runtime=io.containerd.runc.v2\n" Feb 12 19:12:46.417165 kubelet[1520]: E0212 19:12:46.417124 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:46.546152 env[1213]: time="2024-02-12T19:12:46.546036790Z" level=info msg="StopPodSandbox for \"61d17b12f89def6e414ad54c2cfa93e67eb8ebb9d8db70c9db08a3be3449880b\"" Feb 12 19:12:46.546429 env[1213]: time="2024-02-12T19:12:46.546376839Z" level=info msg="TearDown network for sandbox \"61d17b12f89def6e414ad54c2cfa93e67eb8ebb9d8db70c9db08a3be3449880b\" successfully" Feb 12 19:12:46.546429 env[1213]: time="2024-02-12T19:12:46.546426206Z" level=info msg="StopPodSandbox for \"61d17b12f89def6e414ad54c2cfa93e67eb8ebb9d8db70c9db08a3be3449880b\" returns 
successfully" Feb 12 19:12:46.547613 kubelet[1520]: I0212 19:12:46.547381 1520 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=9926138a-6a5a-4fba-ac16-0efb87a2bb00 path="/var/lib/kubelet/pods/9926138a-6a5a-4fba-ac16-0efb87a2bb00/volumes" Feb 12 19:12:46.700156 kubelet[1520]: E0212 19:12:46.700125 1520 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:12:46.700752 kubelet[1520]: E0212 19:12:46.700720 1520 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:12:46.702478 env[1213]: time="2024-02-12T19:12:46.702435314Z" level=info msg="CreateContainer within sandbox \"18971f3f7f5ed1026172a24f524e04cab8ce0cb2822f89c5cc564a6e05d9e4cf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 19:12:46.714751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1712601662.mount: Deactivated successfully. 
Feb 12 19:12:46.719353 env[1213]: time="2024-02-12T19:12:46.719280059Z" level=info msg="CreateContainer within sandbox \"18971f3f7f5ed1026172a24f524e04cab8ce0cb2822f89c5cc564a6e05d9e4cf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c1233ea29ec582a32377c8e086a20041de979e25fb498c407e9e53196d272d80\"" Feb 12 19:12:46.719778 env[1213]: time="2024-02-12T19:12:46.719740966Z" level=info msg="StartContainer for \"c1233ea29ec582a32377c8e086a20041de979e25fb498c407e9e53196d272d80\"" Feb 12 19:12:46.773010 env[1213]: time="2024-02-12T19:12:46.772239766Z" level=info msg="StartContainer for \"c1233ea29ec582a32377c8e086a20041de979e25fb498c407e9e53196d272d80\" returns successfully" Feb 12 19:12:46.799292 env[1213]: time="2024-02-12T19:12:46.799185447Z" level=info msg="shim disconnected" id=c1233ea29ec582a32377c8e086a20041de979e25fb498c407e9e53196d272d80 Feb 12 19:12:46.799292 env[1213]: time="2024-02-12T19:12:46.799233934Z" level=warning msg="cleaning up after shim disconnected" id=c1233ea29ec582a32377c8e086a20041de979e25fb498c407e9e53196d272d80 namespace=k8s.io Feb 12 19:12:46.799292 env[1213]: time="2024-02-12T19:12:46.799244655Z" level=info msg="cleaning up dead shim" Feb 12 19:12:46.805537 env[1213]: time="2024-02-12T19:12:46.805496756Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:12:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3544 runtime=io.containerd.runc.v2\n" Feb 12 19:12:47.417948 kubelet[1520]: E0212 19:12:47.417904 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:47.577797 systemd[1]: run-containerd-runc-k8s.io-c1233ea29ec582a32377c8e086a20041de979e25fb498c407e9e53196d272d80-runc.C7V5Qu.mount: Deactivated successfully. Feb 12 19:12:47.577952 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1233ea29ec582a32377c8e086a20041de979e25fb498c407e9e53196d272d80-rootfs.mount: Deactivated successfully. 
Feb 12 19:12:47.703882 kubelet[1520]: E0212 19:12:47.703428 1520 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:12:47.705744 env[1213]: time="2024-02-12T19:12:47.705693307Z" level=info msg="CreateContainer within sandbox \"18971f3f7f5ed1026172a24f524e04cab8ce0cb2822f89c5cc564a6e05d9e4cf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 19:12:47.722147 env[1213]: time="2024-02-12T19:12:47.722065489Z" level=info msg="CreateContainer within sandbox \"18971f3f7f5ed1026172a24f524e04cab8ce0cb2822f89c5cc564a6e05d9e4cf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9c5813e0307b26e294c0e7c7afc9fb51edb6292f982927ed9e282a4a54b143b2\"" Feb 12 19:12:47.722989 env[1213]: time="2024-02-12T19:12:47.722957935Z" level=info msg="StartContainer for \"9c5813e0307b26e294c0e7c7afc9fb51edb6292f982927ed9e282a4a54b143b2\"" Feb 12 19:12:47.777309 env[1213]: time="2024-02-12T19:12:47.777154676Z" level=info msg="StartContainer for \"9c5813e0307b26e294c0e7c7afc9fb51edb6292f982927ed9e282a4a54b143b2\" returns successfully" Feb 12 19:12:47.801576 env[1213]: time="2024-02-12T19:12:47.801503260Z" level=info msg="shim disconnected" id=9c5813e0307b26e294c0e7c7afc9fb51edb6292f982927ed9e282a4a54b143b2 Feb 12 19:12:47.801576 env[1213]: time="2024-02-12T19:12:47.801567109Z" level=warning msg="cleaning up after shim disconnected" id=9c5813e0307b26e294c0e7c7afc9fb51edb6292f982927ed9e282a4a54b143b2 namespace=k8s.io Feb 12 19:12:47.801576 env[1213]: time="2024-02-12T19:12:47.801579070Z" level=info msg="cleaning up dead shim" Feb 12 19:12:47.809750 env[1213]: time="2024-02-12T19:12:47.808811127Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:12:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3601 runtime=io.containerd.runc.v2\n" Feb 12 19:12:48.418184 kubelet[1520]: E0212 19:12:48.418133 1520 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:48.577794 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c5813e0307b26e294c0e7c7afc9fb51edb6292f982927ed9e282a4a54b143b2-rootfs.mount: Deactivated successfully. Feb 12 19:12:48.706535 kubelet[1520]: E0212 19:12:48.706442 1520 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:12:48.708462 env[1213]: time="2024-02-12T19:12:48.708427302Z" level=info msg="CreateContainer within sandbox \"18971f3f7f5ed1026172a24f524e04cab8ce0cb2822f89c5cc564a6e05d9e4cf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 19:12:48.718966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2183269255.mount: Deactivated successfully. Feb 12 19:12:48.722928 env[1213]: time="2024-02-12T19:12:48.722870927Z" level=info msg="CreateContainer within sandbox \"18971f3f7f5ed1026172a24f524e04cab8ce0cb2822f89c5cc564a6e05d9e4cf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ca8ae8050d5f127cf12a807170d0056b060cd09fecf88d8404d0245807024ff6\"" Feb 12 19:12:48.723554 env[1213]: time="2024-02-12T19:12:48.723520256Z" level=info msg="StartContainer for \"ca8ae8050d5f127cf12a807170d0056b060cd09fecf88d8404d0245807024ff6\"" Feb 12 19:12:48.774300 env[1213]: time="2024-02-12T19:12:48.774255469Z" level=info msg="StartContainer for \"ca8ae8050d5f127cf12a807170d0056b060cd09fecf88d8404d0245807024ff6\" returns successfully" Feb 12 19:12:48.800228 env[1213]: time="2024-02-12T19:12:48.800180552Z" level=info msg="shim disconnected" id=ca8ae8050d5f127cf12a807170d0056b060cd09fecf88d8404d0245807024ff6 Feb 12 19:12:48.800228 env[1213]: time="2024-02-12T19:12:48.800227039Z" level=warning msg="cleaning up after shim disconnected" 
id=ca8ae8050d5f127cf12a807170d0056b060cd09fecf88d8404d0245807024ff6 namespace=k8s.io Feb 12 19:12:48.800228 env[1213]: time="2024-02-12T19:12:48.800236360Z" level=info msg="cleaning up dead shim" Feb 12 19:12:48.808474 env[1213]: time="2024-02-12T19:12:48.808427926Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:12:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3655 runtime=io.containerd.runc.v2\n" Feb 12 19:12:49.418290 kubelet[1520]: E0212 19:12:49.418248 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:49.578008 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca8ae8050d5f127cf12a807170d0056b060cd09fecf88d8404d0245807024ff6-rootfs.mount: Deactivated successfully. Feb 12 19:12:49.710965 kubelet[1520]: E0212 19:12:49.710241 1520 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:12:49.713030 env[1213]: time="2024-02-12T19:12:49.712963845Z" level=info msg="CreateContainer within sandbox \"18971f3f7f5ed1026172a24f524e04cab8ce0cb2822f89c5cc564a6e05d9e4cf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 19:12:49.725806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2556036126.mount: Deactivated successfully. 
Feb 12 19:12:49.729821 env[1213]: time="2024-02-12T19:12:49.729734140Z" level=info msg="CreateContainer within sandbox \"18971f3f7f5ed1026172a24f524e04cab8ce0cb2822f89c5cc564a6e05d9e4cf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"af680b209142ddb0a15a5623e61cad4977cdad440017d532391b7d693f164035\"" Feb 12 19:12:49.730385 env[1213]: time="2024-02-12T19:12:49.730308857Z" level=info msg="StartContainer for \"af680b209142ddb0a15a5623e61cad4977cdad440017d532391b7d693f164035\"" Feb 12 19:12:49.781223 env[1213]: time="2024-02-12T19:12:49.781163015Z" level=info msg="StartContainer for \"af680b209142ddb0a15a5623e61cad4977cdad440017d532391b7d693f164035\" returns successfully" Feb 12 19:12:50.035439 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Feb 12 19:12:50.418405 kubelet[1520]: E0212 19:12:50.418362 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:50.714524 kubelet[1520]: E0212 19:12:50.714425 1520 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:12:50.728218 kubelet[1520]: I0212 19:12:50.728177 1520 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-t5lfq" podStartSLOduration=5.7281425089999995 pod.CreationTimestamp="2024-02-12 19:12:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:12:50.727750537 +0000 UTC m=+71.212207013" watchObservedRunningTime="2024-02-12 19:12:50.728142509 +0000 UTC m=+71.212598945" Feb 12 19:12:51.419259 kubelet[1520]: E0212 19:12:51.419201 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:51.716570 kubelet[1520]: E0212 19:12:51.716455 1520 
dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:12:51.877957 systemd[1]: run-containerd-runc-k8s.io-af680b209142ddb0a15a5623e61cad4977cdad440017d532391b7d693f164035-runc.nR4E5A.mount: Deactivated successfully. Feb 12 19:12:52.420137 kubelet[1520]: E0212 19:12:52.420090 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:52.718975 kubelet[1520]: E0212 19:12:52.718406 1520 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:12:52.743844 systemd-networkd[1109]: lxc_health: Link UP Feb 12 19:12:52.751352 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 19:12:52.753464 systemd-networkd[1109]: lxc_health: Gained carrier Feb 12 19:12:53.421150 kubelet[1520]: E0212 19:12:53.421116 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:53.875467 systemd-networkd[1109]: lxc_health: Gained IPv6LL Feb 12 19:12:54.046549 systemd[1]: run-containerd-runc-k8s.io-af680b209142ddb0a15a5623e61cad4977cdad440017d532391b7d693f164035-runc.GdIRGB.mount: Deactivated successfully. 
Feb 12 19:12:54.059338 kubelet[1520]: E0212 19:12:54.059277 1520 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:12:54.422304 kubelet[1520]: E0212 19:12:54.422261 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:54.721433 kubelet[1520]: E0212 19:12:54.721102 1520 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:12:55.422641 kubelet[1520]: E0212 19:12:55.422563 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:55.723157 kubelet[1520]: E0212 19:12:55.723061 1520 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:12:56.423705 kubelet[1520]: E0212 19:12:56.423656 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:57.424258 kubelet[1520]: E0212 19:12:57.424215 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:58.352070 systemd[1]: run-containerd-runc-k8s.io-af680b209142ddb0a15a5623e61cad4977cdad440017d532391b7d693f164035-runc.gJA1Xu.mount: Deactivated successfully. 
Feb 12 19:12:58.424411 kubelet[1520]: E0212 19:12:58.424343 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:12:59.424910 kubelet[1520]: E0212 19:12:59.424861 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:13:00.372855 kubelet[1520]: E0212 19:13:00.372820 1520 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:13:00.425397 kubelet[1520]: E0212 19:13:00.425361 1520 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"