Sep  4 17:10:34.902255 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep  4 17:10:34.902276 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Wed Sep  4 15:52:28 -00 2024
Sep  4 17:10:34.902285 kernel: KASLR enabled
Sep  4 17:10:34.902291 kernel: efi: EFI v2.7 by EDK II
Sep  4 17:10:34.902297 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb900018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 
Sep  4 17:10:34.902303 kernel: random: crng init done
Sep  4 17:10:34.902378 kernel: ACPI: Early table checksum verification disabled
Sep  4 17:10:34.902385 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Sep  4 17:10:34.902391 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS  BXPC     00000001      01000013)
Sep  4 17:10:34.902400 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS  BXPC     00000001 BXPC 00000001)
Sep  4 17:10:34.902406 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Sep  4 17:10:34.902412 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS  BXPC     00000001 BXPC 00000001)
Sep  4 17:10:34.902418 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Sep  4 17:10:34.902425 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Sep  4 17:10:34.902432 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Sep  4 17:10:34.902439 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Sep  4 17:10:34.902446 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS  BXPC     00000001 BXPC 00000001)
Sep  4 17:10:34.902453 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Sep  4 17:10:34.902459 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep  4 17:10:34.902465 kernel: NUMA: Failed to initialise from firmware
Sep  4 17:10:34.902472 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep  4 17:10:34.902478 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Sep  4 17:10:34.902492 kernel: Zone ranges:
Sep  4 17:10:34.902499 kernel:   DMA      [mem 0x0000000040000000-0x00000000dcffffff]
Sep  4 17:10:34.902505 kernel:   DMA32    empty
Sep  4 17:10:34.902513 kernel:   Normal   empty
Sep  4 17:10:34.902520 kernel: Movable zone start for each node
Sep  4 17:10:34.902526 kernel: Early memory node ranges
Sep  4 17:10:34.902532 kernel:   node   0: [mem 0x0000000040000000-0x00000000d976ffff]
Sep  4 17:10:34.902538 kernel:   node   0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Sep  4 17:10:34.902545 kernel:   node   0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Sep  4 17:10:34.902551 kernel:   node   0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep  4 17:10:34.902557 kernel:   node   0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep  4 17:10:34.902564 kernel:   node   0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep  4 17:10:34.902570 kernel:   node   0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep  4 17:10:34.902577 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep  4 17:10:34.902583 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep  4 17:10:34.902591 kernel: psci: probing for conduit method from ACPI.
Sep  4 17:10:34.902597 kernel: psci: PSCIv1.1 detected in firmware.
Sep  4 17:10:34.902603 kernel: psci: Using standard PSCI v0.2 function IDs
Sep  4 17:10:34.902612 kernel: psci: Trusted OS migration not required
Sep  4 17:10:34.902619 kernel: psci: SMC Calling Convention v1.1
Sep  4 17:10:34.902626 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep  4 17:10:34.902634 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep  4 17:10:34.902641 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep  4 17:10:34.902648 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 
Sep  4 17:10:34.902655 kernel: Detected PIPT I-cache on CPU0
Sep  4 17:10:34.902661 kernel: CPU features: detected: GIC system register CPU interface
Sep  4 17:10:34.902668 kernel: CPU features: detected: Hardware dirty bit management
Sep  4 17:10:34.902675 kernel: CPU features: detected: Spectre-v4
Sep  4 17:10:34.902681 kernel: CPU features: detected: Spectre-BHB
Sep  4 17:10:34.902688 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep  4 17:10:34.902695 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep  4 17:10:34.902703 kernel: CPU features: detected: ARM erratum 1418040
Sep  4 17:10:34.902710 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep  4 17:10:34.902716 kernel: alternatives: applying boot alternatives
Sep  4 17:10:34.902724 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7913866621ae0af53522ae1b4ff4e1e453dd69d966d437a439147039341ecbbc
Sep  4 17:10:34.902731 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep  4 17:10:34.902738 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep  4 17:10:34.902745 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep  4 17:10:34.902752 kernel: Fallback order for Node 0: 0 
Sep  4 17:10:34.902759 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 633024
Sep  4 17:10:34.902765 kernel: Policy zone: DMA
Sep  4 17:10:34.902772 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep  4 17:10:34.902780 kernel: software IO TLB: area num 4.
Sep  4 17:10:34.902787 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Sep  4 17:10:34.902794 kernel: Memory: 2386852K/2572288K available (10240K kernel code, 2182K rwdata, 8076K rodata, 39040K init, 897K bss, 185436K reserved, 0K cma-reserved)
Sep  4 17:10:34.902801 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep  4 17:10:34.902808 kernel: trace event string verifier disabled
Sep  4 17:10:34.902815 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep  4 17:10:34.902822 kernel: rcu:         RCU event tracing is enabled.
Sep  4 17:10:34.902829 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep  4 17:10:34.902836 kernel:         Trampoline variant of Tasks RCU enabled.
Sep  4 17:10:34.902843 kernel:         Tracing variant of Tasks RCU enabled.
Sep  4 17:10:34.902850 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep  4 17:10:34.902857 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep  4 17:10:34.902865 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep  4 17:10:34.902872 kernel: GICv3: 256 SPIs implemented
Sep  4 17:10:34.902878 kernel: GICv3: 0 Extended SPIs implemented
Sep  4 17:10:34.902885 kernel: Root IRQ handler: gic_handle_irq
Sep  4 17:10:34.902892 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep  4 17:10:34.902899 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep  4 17:10:34.902905 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep  4 17:10:34.902912 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Sep  4 17:10:34.902919 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Sep  4 17:10:34.902926 kernel: GICv3: using LPI property table @0x00000000400f0000
Sep  4 17:10:34.902933 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Sep  4 17:10:34.902941 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep  4 17:10:34.902948 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep  4 17:10:34.902955 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep  4 17:10:34.902962 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep  4 17:10:34.902969 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep  4 17:10:34.902976 kernel: arm-pv: using stolen time PV
Sep  4 17:10:34.902983 kernel: Console: colour dummy device 80x25
Sep  4 17:10:34.902990 kernel: ACPI: Core revision 20230628
Sep  4 17:10:34.902997 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep  4 17:10:34.903004 kernel: pid_max: default: 32768 minimum: 301
Sep  4 17:10:34.903012 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Sep  4 17:10:34.903019 kernel: SELinux:  Initializing.
Sep  4 17:10:34.903026 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep  4 17:10:34.903034 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep  4 17:10:34.903041 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Sep  4 17:10:34.903048 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Sep  4 17:10:34.903054 kernel: rcu: Hierarchical SRCU implementation.
Sep  4 17:10:34.903062 kernel: rcu:         Max phase no-delay instances is 400.
Sep  4 17:10:34.903068 kernel: Platform MSI: ITS@0x8080000 domain created
Sep  4 17:10:34.903077 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep  4 17:10:34.903084 kernel: Remapping and enabling EFI services.
Sep  4 17:10:34.903091 kernel: smp: Bringing up secondary CPUs ...
Sep  4 17:10:34.903098 kernel: Detected PIPT I-cache on CPU1
Sep  4 17:10:34.903105 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep  4 17:10:34.903112 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Sep  4 17:10:34.903119 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep  4 17:10:34.903126 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep  4 17:10:34.903133 kernel: Detected PIPT I-cache on CPU2
Sep  4 17:10:34.903140 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep  4 17:10:34.903148 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Sep  4 17:10:34.903155 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep  4 17:10:34.903167 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep  4 17:10:34.903176 kernel: Detected PIPT I-cache on CPU3
Sep  4 17:10:34.903183 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep  4 17:10:34.903190 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Sep  4 17:10:34.903198 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep  4 17:10:34.903205 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep  4 17:10:34.903212 kernel: smp: Brought up 1 node, 4 CPUs
Sep  4 17:10:34.903221 kernel: SMP: Total of 4 processors activated.
Sep  4 17:10:34.903228 kernel: CPU features: detected: 32-bit EL0 Support
Sep  4 17:10:34.903236 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep  4 17:10:34.903243 kernel: CPU features: detected: Common not Private translations
Sep  4 17:10:34.903250 kernel: CPU features: detected: CRC32 instructions
Sep  4 17:10:34.903258 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep  4 17:10:34.903265 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep  4 17:10:34.903272 kernel: CPU features: detected: LSE atomic instructions
Sep  4 17:10:34.903281 kernel: CPU features: detected: Privileged Access Never
Sep  4 17:10:34.903288 kernel: CPU features: detected: RAS Extension Support
Sep  4 17:10:34.903295 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep  4 17:10:34.903303 kernel: CPU: All CPU(s) started at EL1
Sep  4 17:10:34.903317 kernel: alternatives: applying system-wide alternatives
Sep  4 17:10:34.903324 kernel: devtmpfs: initialized
Sep  4 17:10:34.903331 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep  4 17:10:34.903339 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep  4 17:10:34.903346 kernel: pinctrl core: initialized pinctrl subsystem
Sep  4 17:10:34.903355 kernel: SMBIOS 3.0.0 present.
Sep  4 17:10:34.903363 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Sep  4 17:10:34.903370 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep  4 17:10:34.903378 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep  4 17:10:34.903385 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep  4 17:10:34.903393 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep  4 17:10:34.903400 kernel: audit: initializing netlink subsys (disabled)
Sep  4 17:10:34.903408 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Sep  4 17:10:34.903415 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep  4 17:10:34.903423 kernel: cpuidle: using governor menu
Sep  4 17:10:34.903431 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep  4 17:10:34.903438 kernel: ASID allocator initialised with 32768 entries
Sep  4 17:10:34.903446 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep  4 17:10:34.903453 kernel: Serial: AMBA PL011 UART driver
Sep  4 17:10:34.903460 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep  4 17:10:34.903468 kernel: Modules: 0 pages in range for non-PLT usage
Sep  4 17:10:34.903475 kernel: Modules: 509120 pages in range for PLT usage
Sep  4 17:10:34.903486 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep  4 17:10:34.903497 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep  4 17:10:34.903505 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep  4 17:10:34.903512 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep  4 17:10:34.903519 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep  4 17:10:34.903526 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep  4 17:10:34.903533 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep  4 17:10:34.903540 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep  4 17:10:34.903551 kernel: ACPI: Added _OSI(Module Device)
Sep  4 17:10:34.903558 kernel: ACPI: Added _OSI(Processor Device)
Sep  4 17:10:34.903567 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Sep  4 17:10:34.903575 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep  4 17:10:34.903582 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep  4 17:10:34.903590 kernel: ACPI: Interpreter enabled
Sep  4 17:10:34.903597 kernel: ACPI: Using GIC for interrupt routing
Sep  4 17:10:34.903604 kernel: ACPI: MCFG table detected, 1 entries
Sep  4 17:10:34.903611 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep  4 17:10:34.903618 kernel: printk: console [ttyAMA0] enabled
Sep  4 17:10:34.903626 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep  4 17:10:34.903764 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep  4 17:10:34.903841 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep  4 17:10:34.903905 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep  4 17:10:34.903967 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep  4 17:10:34.904028 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep  4 17:10:34.904038 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io  0x0000-0xffff window]
Sep  4 17:10:34.904045 kernel: PCI host bridge to bus 0000:00
Sep  4 17:10:34.904114 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep  4 17:10:34.904172 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0xffff window]
Sep  4 17:10:34.904228 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep  4 17:10:34.904316 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep  4 17:10:34.904401 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep  4 17:10:34.904476 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep  4 17:10:34.904553 kernel: pci 0000:00:01.0: reg 0x10: [io  0x0000-0x001f]
Sep  4 17:10:34.904622 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep  4 17:10:34.904687 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep  4 17:10:34.904751 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep  4 17:10:34.904815 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep  4 17:10:34.904879 kernel: pci 0000:00:01.0: BAR 0: assigned [io  0x1000-0x101f]
Sep  4 17:10:34.904936 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep  4 17:10:34.904991 kernel: pci_bus 0000:00: resource 5 [io  0x0000-0xffff window]
Sep  4 17:10:34.905050 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep  4 17:10:34.905059 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep  4 17:10:34.905067 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep  4 17:10:34.905074 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep  4 17:10:34.905082 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep  4 17:10:34.905089 kernel: iommu: Default domain type: Translated
Sep  4 17:10:34.905097 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep  4 17:10:34.905104 kernel: efivars: Registered efivars operations
Sep  4 17:10:34.905113 kernel: vgaarb: loaded
Sep  4 17:10:34.905120 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep  4 17:10:34.905127 kernel: VFS: Disk quotas dquot_6.6.0
Sep  4 17:10:34.905135 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep  4 17:10:34.905142 kernel: pnp: PnP ACPI init
Sep  4 17:10:34.905214 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep  4 17:10:34.905224 kernel: pnp: PnP ACPI: found 1 devices
Sep  4 17:10:34.905232 kernel: NET: Registered PF_INET protocol family
Sep  4 17:10:34.905241 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep  4 17:10:34.905249 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep  4 17:10:34.905256 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep  4 17:10:34.905263 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep  4 17:10:34.905271 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep  4 17:10:34.905278 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep  4 17:10:34.905286 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep  4 17:10:34.905293 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep  4 17:10:34.905300 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep  4 17:10:34.905319 kernel: PCI: CLS 0 bytes, default 64
Sep  4 17:10:34.905328 kernel: kvm [1]: HYP mode not available
Sep  4 17:10:34.905335 kernel: Initialise system trusted keyrings
Sep  4 17:10:34.905343 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep  4 17:10:34.905350 kernel: Key type asymmetric registered
Sep  4 17:10:34.905357 kernel: Asymmetric key parser 'x509' registered
Sep  4 17:10:34.905365 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep  4 17:10:34.905372 kernel: io scheduler mq-deadline registered
Sep  4 17:10:34.905379 kernel: io scheduler kyber registered
Sep  4 17:10:34.905388 kernel: io scheduler bfq registered
Sep  4 17:10:34.905396 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep  4 17:10:34.905403 kernel: ACPI: button: Power Button [PWRB]
Sep  4 17:10:34.905411 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep  4 17:10:34.905489 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep  4 17:10:34.905501 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep  4 17:10:34.905508 kernel: thunder_xcv, ver 1.0
Sep  4 17:10:34.905516 kernel: thunder_bgx, ver 1.0
Sep  4 17:10:34.905523 kernel: nicpf, ver 1.0
Sep  4 17:10:34.905530 kernel: nicvf, ver 1.0
Sep  4 17:10:34.905611 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep  4 17:10:34.905675 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-09-04T17:10:34 UTC (1725469834)
Sep  4 17:10:34.905685 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep  4 17:10:34.905692 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep  4 17:10:34.905699 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep  4 17:10:34.905707 kernel: watchdog: Hard watchdog permanently disabled
Sep  4 17:10:34.905714 kernel: NET: Registered PF_INET6 protocol family
Sep  4 17:10:34.905724 kernel: Segment Routing with IPv6
Sep  4 17:10:34.905731 kernel: In-situ OAM (IOAM) with IPv6
Sep  4 17:10:34.905738 kernel: NET: Registered PF_PACKET protocol family
Sep  4 17:10:34.905745 kernel: Key type dns_resolver registered
Sep  4 17:10:34.905753 kernel: registered taskstats version 1
Sep  4 17:10:34.905760 kernel: Loading compiled-in X.509 certificates
Sep  4 17:10:34.905767 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 1f5b9f288f9cae6ec9698678cdc0f614482066f7'
Sep  4 17:10:34.905774 kernel: Key type .fscrypt registered
Sep  4 17:10:34.905781 kernel: Key type fscrypt-provisioning registered
Sep  4 17:10:34.905789 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep  4 17:10:34.905798 kernel: ima: Allocated hash algorithm: sha1
Sep  4 17:10:34.905805 kernel: ima: No architecture policies found
Sep  4 17:10:34.905812 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep  4 17:10:34.905819 kernel: clk: Disabling unused clocks
Sep  4 17:10:34.905826 kernel: Freeing unused kernel memory: 39040K
Sep  4 17:10:34.905834 kernel: Run /init as init process
Sep  4 17:10:34.905841 kernel:   with arguments:
Sep  4 17:10:34.905848 kernel:     /init
Sep  4 17:10:34.905856 kernel:   with environment:
Sep  4 17:10:34.905863 kernel:     HOME=/
Sep  4 17:10:34.905871 kernel:     TERM=linux
Sep  4 17:10:34.905878 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Sep  4 17:10:34.905886 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep  4 17:10:34.905896 systemd[1]: Detected virtualization kvm.
Sep  4 17:10:34.905904 systemd[1]: Detected architecture arm64.
Sep  4 17:10:34.905912 systemd[1]: Running in initrd.
Sep  4 17:10:34.905921 systemd[1]: No hostname configured, using default hostname.
Sep  4 17:10:34.905929 systemd[1]: Hostname set to <localhost>.
Sep  4 17:10:34.905937 systemd[1]: Initializing machine ID from VM UUID.
Sep  4 17:10:34.905945 systemd[1]: Queued start job for default target initrd.target.
Sep  4 17:10:34.905953 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep  4 17:10:34.905961 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep  4 17:10:34.905969 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep  4 17:10:34.905977 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep  4 17:10:34.905987 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep  4 17:10:34.905995 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep  4 17:10:34.906004 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep  4 17:10:34.906012 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep  4 17:10:34.906020 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep  4 17:10:34.906028 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep  4 17:10:34.906036 systemd[1]: Reached target paths.target - Path Units.
Sep  4 17:10:34.906046 systemd[1]: Reached target slices.target - Slice Units.
Sep  4 17:10:34.906054 systemd[1]: Reached target swap.target - Swaps.
Sep  4 17:10:34.906061 systemd[1]: Reached target timers.target - Timer Units.
Sep  4 17:10:34.906070 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep  4 17:10:34.906078 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep  4 17:10:34.906087 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep  4 17:10:34.906095 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep  4 17:10:34.906103 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep  4 17:10:34.906113 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep  4 17:10:34.906122 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep  4 17:10:34.906130 systemd[1]: Reached target sockets.target - Socket Units.
Sep  4 17:10:34.906138 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep  4 17:10:34.906146 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep  4 17:10:34.906154 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep  4 17:10:34.906162 systemd[1]: Starting systemd-fsck-usr.service...
Sep  4 17:10:34.906170 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep  4 17:10:34.906178 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep  4 17:10:34.906188 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep  4 17:10:34.906196 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep  4 17:10:34.906204 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep  4 17:10:34.906212 systemd[1]: Finished systemd-fsck-usr.service.
Sep  4 17:10:34.906220 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep  4 17:10:34.906244 systemd-journald[237]: Collecting audit messages is disabled.
Sep  4 17:10:34.906264 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep  4 17:10:34.906272 systemd-journald[237]: Journal started
Sep  4 17:10:34.906293 systemd-journald[237]: Runtime Journal (/run/log/journal/8886928be0f843bcabb1616f6057dda3) is 5.9M, max 47.3M, 41.4M free.
Sep  4 17:10:34.895401 systemd-modules-load[239]: Inserted module 'overlay'
Sep  4 17:10:34.908384 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep  4 17:10:34.912894 systemd[1]: Started systemd-journald.service - Journal Service.
Sep  4 17:10:34.911605 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep  4 17:10:34.914896 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep  4 17:10:34.918169 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep  4 17:10:34.918190 kernel: Bridge firewalling registered
Sep  4 17:10:34.918700 systemd-modules-load[239]: Inserted module 'br_netfilter'
Sep  4 17:10:34.920434 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep  4 17:10:34.923359 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep  4 17:10:34.924618 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep  4 17:10:34.928360 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep  4 17:10:34.931355 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep  4 17:10:34.936011 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep  4 17:10:34.949498 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep  4 17:10:34.950594 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep  4 17:10:34.955592 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep  4 17:10:34.962320 dracut-cmdline[277]: dracut-dracut-053
Sep  4 17:10:34.964783 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7913866621ae0af53522ae1b4ff4e1e453dd69d966d437a439147039341ecbbc
Sep  4 17:10:34.994202 systemd-resolved[282]: Positive Trust Anchors:
Sep  4 17:10:34.994224 systemd-resolved[282]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep  4 17:10:34.994254 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep  4 17:10:34.998818 systemd-resolved[282]: Defaulting to hostname 'linux'.
Sep  4 17:10:34.999826 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep  4 17:10:35.003399 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep  4 17:10:35.042341 kernel: SCSI subsystem initialized
Sep  4 17:10:35.047325 kernel: Loading iSCSI transport class v2.0-870.
Sep  4 17:10:35.055341 kernel: iscsi: registered transport (tcp)
Sep  4 17:10:35.068332 kernel: iscsi: registered transport (qla4xxx)
Sep  4 17:10:35.068352 kernel: QLogic iSCSI HBA Driver
Sep  4 17:10:35.119987 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep  4 17:10:35.133641 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep  4 17:10:35.152345 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep  4 17:10:35.154183 kernel: device-mapper: uevent: version 1.0.3
Sep  4 17:10:35.154245 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep  4 17:10:35.211384 kernel: raid6: neonx8   gen() 10751 MB/s
Sep  4 17:10:35.227384 kernel: raid6: neonx4   gen() 15594 MB/s
Sep  4 17:10:35.244386 kernel: raid6: neonx2   gen() 13173 MB/s
Sep  4 17:10:35.264548 kernel: raid6: neonx1   gen() 10422 MB/s
Sep  4 17:10:35.278366 kernel: raid6: int64x8  gen()  6936 MB/s
Sep  4 17:10:35.295391 kernel: raid6: int64x4  gen()  7331 MB/s
Sep  4 17:10:35.312387 kernel: raid6: int64x2  gen()  6117 MB/s
Sep  4 17:10:35.329456 kernel: raid6: int64x1  gen()  5041 MB/s
Sep  4 17:10:35.329559 kernel: raid6: using algorithm neonx4 gen() 15594 MB/s
Sep  4 17:10:35.347478 kernel: raid6: .... xor() 12061 MB/s, rmw enabled
Sep  4 17:10:35.347561 kernel: raid6: using neon recovery algorithm
Sep  4 17:10:35.355363 kernel: xor: measuring software checksum speed
Sep  4 17:10:35.356358 kernel:    8regs           : 19854 MB/sec
Sep  4 17:10:35.357364 kernel:    32regs          : 19682 MB/sec
Sep  4 17:10:35.358679 kernel:    arm64_neon      : 26597 MB/sec
Sep  4 17:10:35.358717 kernel: xor: using function: arm64_neon (26597 MB/sec)
Sep  4 17:10:35.412390 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep  4 17:10:35.428295 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep  4 17:10:35.436681 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep  4 17:10:35.449935 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Sep  4 17:10:35.453336 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep  4 17:10:35.464281 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep  4 17:10:35.478718 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Sep  4 17:10:35.522917 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep  4 17:10:35.538956 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep  4 17:10:35.587399 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep  4 17:10:35.600510 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep  4 17:10:35.617101 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep  4 17:10:35.619145 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep  4 17:10:35.620888 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep  4 17:10:35.623848 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep  4 17:10:35.632596 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep  4 17:10:35.649110 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep  4 17:10:35.649984 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep  4 17:10:35.653988 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep  4 17:10:35.667504 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep  4 17:10:35.667574 kernel: GPT:9289727 != 19775487
Sep  4 17:10:35.667584 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep  4 17:10:35.667595 kernel: GPT:9289727 != 19775487
Sep  4 17:10:35.668567 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep  4 17:10:35.669088 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep  4 17:10:35.669195 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep  4 17:10:35.685332 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep  4 17:10:35.687577 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep  4 17:10:35.689218 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep  4 17:10:35.689306 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep  4 17:10:35.691905 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep  4 17:10:35.701590 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep  4 17:10:35.717383 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep  4 17:10:35.717550 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (522)
Sep  4 17:10:35.722755 kernel: BTRFS: device fsid 2be47701-3393-455e-86fc-33755ceb9c20 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (514)
Sep  4 17:10:35.727703 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep  4 17:10:35.735255 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep  4 17:10:35.740821 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep  4 17:10:35.744968 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep  4 17:10:35.747109 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep  4 17:10:35.768630 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep  4 17:10:35.770631 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep  4 17:10:35.776647 disk-uuid[554]: Primary Header is updated.
Sep  4 17:10:35.776647 disk-uuid[554]: Secondary Entries is updated.
Sep  4 17:10:35.776647 disk-uuid[554]: Secondary Header is updated.
Sep  4 17:10:35.784343 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep  4 17:10:35.801634 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep  4 17:10:36.801580 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep  4 17:10:36.802619 disk-uuid[555]: The operation has completed successfully.
Sep  4 17:10:36.831190 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep  4 17:10:36.831288 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep  4 17:10:36.852563 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep  4 17:10:36.856537 sh[576]: Success
Sep  4 17:10:36.872367 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep  4 17:10:36.908276 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep  4 17:10:36.917828 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep  4 17:10:36.920353 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep  4 17:10:36.933322 kernel: BTRFS info (device dm-0): first mount of filesystem 2be47701-3393-455e-86fc-33755ceb9c20
Sep  4 17:10:36.933373 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep  4 17:10:36.933384 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep  4 17:10:36.934880 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep  4 17:10:36.934898 kernel: BTRFS info (device dm-0): using free space tree
Sep  4 17:10:36.940280 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep  4 17:10:36.941759 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep  4 17:10:36.951497 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep  4 17:10:36.953219 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep  4 17:10:36.966727 kernel: BTRFS info (device vda6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep  4 17:10:36.966789 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep  4 17:10:36.966800 kernel: BTRFS info (device vda6): using free space tree
Sep  4 17:10:36.970784 kernel: BTRFS info (device vda6): auto enabling async discard
Sep  4 17:10:36.978709 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep  4 17:10:36.980489 kernel: BTRFS info (device vda6): last unmount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep  4 17:10:36.989928 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep  4 17:10:36.997524 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep  4 17:10:37.067082 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep  4 17:10:37.095625 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep  4 17:10:37.137139 systemd-networkd[761]: lo: Link UP
Sep  4 17:10:37.137149 systemd-networkd[761]: lo: Gained carrier
Sep  4 17:10:37.137869 systemd-networkd[761]: Enumeration completed
Sep  4 17:10:37.138169 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep  4 17:10:37.139414 systemd[1]: Reached target network.target - Network.
Sep  4 17:10:37.140674 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep  4 17:10:37.140677 systemd-networkd[761]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep  4 17:10:37.142911 systemd-networkd[761]: eth0: Link UP
Sep  4 17:10:37.142915 systemd-networkd[761]: eth0: Gained carrier
Sep  4 17:10:37.142923 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep  4 17:10:37.163515 ignition[675]: Ignition 2.18.0
Sep  4 17:10:37.163526 ignition[675]: Stage: fetch-offline
Sep  4 17:10:37.163566 ignition[675]: no configs at "/usr/lib/ignition/base.d"
Sep  4 17:10:37.163574 ignition[675]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep  4 17:10:37.163669 ignition[675]: parsed url from cmdline: ""
Sep  4 17:10:37.163672 ignition[675]: no config URL provided
Sep  4 17:10:37.163678 ignition[675]: reading system config file "/usr/lib/ignition/user.ign"
Sep  4 17:10:37.163685 ignition[675]: no config at "/usr/lib/ignition/user.ign"
Sep  4 17:10:37.163713 ignition[675]: op(1): [started]  loading QEMU firmware config module
Sep  4 17:10:37.163718 ignition[675]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep  4 17:10:37.167404 systemd-networkd[761]: eth0: DHCPv4 address 10.0.0.33/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep  4 17:10:37.175427 ignition[675]: op(1): [finished] loading QEMU firmware config module
Sep  4 17:10:37.214667 ignition[675]: parsing config with SHA512: b0a909d84c6ce17e7d153239e10a88559383f1841967be3e16a7a64247357a4563683b6b632d2a5b64dee21732e5f22e9ef1eb65c693cc7f4cf53ea92af4d1b7
Sep  4 17:10:37.221480 unknown[675]: fetched base config from "system"
Sep  4 17:10:37.221493 unknown[675]: fetched user config from "qemu"
Sep  4 17:10:37.222033 ignition[675]: fetch-offline: fetch-offline passed
Sep  4 17:10:37.222103 ignition[675]: Ignition finished successfully
Sep  4 17:10:37.223384 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep  4 17:10:37.225054 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep  4 17:10:37.234552 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep  4 17:10:37.246144 ignition[773]: Ignition 2.18.0
Sep  4 17:10:37.246157 ignition[773]: Stage: kargs
Sep  4 17:10:37.246570 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Sep  4 17:10:37.246582 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep  4 17:10:37.247504 ignition[773]: kargs: kargs passed
Sep  4 17:10:37.247558 ignition[773]: Ignition finished successfully
Sep  4 17:10:37.250002 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep  4 17:10:37.258523 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep  4 17:10:37.270444 ignition[783]: Ignition 2.18.0
Sep  4 17:10:37.270454 ignition[783]: Stage: disks
Sep  4 17:10:37.270674 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Sep  4 17:10:37.270684 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep  4 17:10:37.271606 ignition[783]: disks: disks passed
Sep  4 17:10:37.271658 ignition[783]: Ignition finished successfully
Sep  4 17:10:37.275360 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep  4 17:10:37.276599 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep  4 17:10:37.278073 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep  4 17:10:37.280020 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep  4 17:10:37.281849 systemd[1]: Reached target sysinit.target - System Initialization.
Sep  4 17:10:37.283811 systemd[1]: Reached target basic.target - Basic System.
Sep  4 17:10:37.295493 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep  4 17:10:37.311390 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep  4 17:10:37.318133 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep  4 17:10:37.327442 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep  4 17:10:37.370335 kernel: EXT4-fs (vda9): mounted filesystem f2f4f3ba-c5a3-49c0-ace4-444935e9934b r/w with ordered data mode. Quota mode: none.
Sep  4 17:10:37.370784 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep  4 17:10:37.372174 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep  4 17:10:37.383408 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep  4 17:10:37.386039 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep  4 17:10:37.387154 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep  4 17:10:37.387199 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep  4 17:10:37.387223 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep  4 17:10:37.393777 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep  4 17:10:37.397844 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep  4 17:10:37.399941 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (802)
Sep  4 17:10:37.402482 kernel: BTRFS info (device vda6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep  4 17:10:37.402504 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep  4 17:10:37.402515 kernel: BTRFS info (device vda6): using free space tree
Sep  4 17:10:37.406329 kernel: BTRFS info (device vda6): auto enabling async discard
Sep  4 17:10:37.408125 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep  4 17:10:37.448949 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory
Sep  4 17:10:37.453621 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory
Sep  4 17:10:37.458014 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory
Sep  4 17:10:37.462848 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory
Sep  4 17:10:37.534178 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep  4 17:10:37.542488 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep  4 17:10:37.546021 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep  4 17:10:37.550335 kernel: BTRFS info (device vda6): last unmount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep  4 17:10:37.565678 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep  4 17:10:37.568860 ignition[917]: INFO     : Ignition 2.18.0
Sep  4 17:10:37.568860 ignition[917]: INFO     : Stage: mount
Sep  4 17:10:37.570488 ignition[917]: INFO     : no configs at "/usr/lib/ignition/base.d"
Sep  4 17:10:37.570488 ignition[917]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep  4 17:10:37.573341 ignition[917]: INFO     : mount: mount passed
Sep  4 17:10:37.573341 ignition[917]: INFO     : Ignition finished successfully
Sep  4 17:10:37.572863 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep  4 17:10:37.583801 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep  4 17:10:37.931247 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep  4 17:10:37.944505 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep  4 17:10:37.951229 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (931)
Sep  4 17:10:37.951263 kernel: BTRFS info (device vda6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep  4 17:10:37.951276 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep  4 17:10:37.952100 kernel: BTRFS info (device vda6): using free space tree
Sep  4 17:10:37.954340 kernel: BTRFS info (device vda6): auto enabling async discard
Sep  4 17:10:37.955878 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep  4 17:10:37.978642 ignition[948]: INFO     : Ignition 2.18.0
Sep  4 17:10:37.978642 ignition[948]: INFO     : Stage: files
Sep  4 17:10:37.980344 ignition[948]: INFO     : no configs at "/usr/lib/ignition/base.d"
Sep  4 17:10:37.980344 ignition[948]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep  4 17:10:37.980344 ignition[948]: DEBUG    : files: compiled without relabeling support, skipping
Sep  4 17:10:37.983804 ignition[948]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Sep  4 17:10:37.983804 ignition[948]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep  4 17:10:37.984854 unknown[948]: wrote ssh authorized keys file for user: core
Sep  4 17:10:37.986588 ignition[948]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep  4 17:10:37.986588 ignition[948]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Sep  4 17:10:37.986588 ignition[948]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep  4 17:10:37.986588 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep  4 17:10:37.986588 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep  4 17:10:38.248795 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep  4 17:10:38.299825 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep  4 17:10:38.299825 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/home/core/install.sh"
Sep  4 17:10:38.303664 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep  4 17:10:38.303664 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/nginx.yaml"
Sep  4 17:10:38.303664 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep  4 17:10:38.303664 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Sep  4 17:10:38.303664 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep  4 17:10:38.303664 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep  4 17:10:38.303664 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep  4 17:10:38.303664 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Sep  4 17:10:38.303664 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep  4 17:10:38.303664 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Sep  4 17:10:38.303664 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Sep  4 17:10:38.303664 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Sep  4 17:10:38.303664 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1
Sep  4 17:10:38.617892 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep  4 17:10:38.895711 ignition[948]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Sep  4 17:10:38.895711 ignition[948]: INFO     : files: op(b): [started]  processing unit "prepare-helm.service"
Sep  4 17:10:38.899356 ignition[948]: INFO     : files: op(b): op(c): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep  4 17:10:38.899356 ignition[948]: INFO     : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep  4 17:10:38.899356 ignition[948]: INFO     : files: op(b): [finished] processing unit "prepare-helm.service"
Sep  4 17:10:38.899356 ignition[948]: INFO     : files: op(d): [started]  processing unit "coreos-metadata.service"
Sep  4 17:10:38.899356 ignition[948]: INFO     : files: op(d): op(e): [started]  writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep  4 17:10:38.899356 ignition[948]: INFO     : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep  4 17:10:38.899356 ignition[948]: INFO     : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep  4 17:10:38.899356 ignition[948]: INFO     : files: op(f): [started]  setting preset to disabled for "coreos-metadata.service"
Sep  4 17:10:38.928472 ignition[948]: INFO     : files: op(f): op(10): [started]  removing enablement symlink(s) for "coreos-metadata.service"
Sep  4 17:10:38.933105 ignition[948]: INFO     : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep  4 17:10:38.934813 ignition[948]: INFO     : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep  4 17:10:38.934813 ignition[948]: INFO     : files: op(11): [started]  setting preset to enabled for "prepare-helm.service"
Sep  4 17:10:38.934813 ignition[948]: INFO     : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep  4 17:10:38.934813 ignition[948]: INFO     : files: createResultFile: createFiles: op(12): [started]  writing file "/sysroot/etc/.ignition-result.json"
Sep  4 17:10:38.934813 ignition[948]: INFO     : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep  4 17:10:38.934813 ignition[948]: INFO     : files: files passed
Sep  4 17:10:38.934813 ignition[948]: INFO     : Ignition finished successfully
Sep  4 17:10:38.935083 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep  4 17:10:38.948495 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep  4 17:10:38.950787 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep  4 17:10:38.952529 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep  4 17:10:38.952613 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep  4 17:10:38.958834 initrd-setup-root-after-ignition[976]: grep: /sysroot/oem/oem-release: No such file or directory
Sep  4 17:10:38.962579 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep  4 17:10:38.962579 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep  4 17:10:38.965572 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep  4 17:10:38.965163 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep  4 17:10:38.966783 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep  4 17:10:38.981554 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep  4 17:10:39.002154 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep  4 17:10:39.002289 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep  4 17:10:39.004515 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep  4 17:10:39.006370 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep  4 17:10:39.008088 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep  4 17:10:39.008925 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep  4 17:10:39.024499 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep  4 17:10:39.032702 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep  4 17:10:39.040939 systemd[1]: Stopped target network.target - Network.
Sep  4 17:10:39.041938 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep  4 17:10:39.043684 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep  4 17:10:39.045614 systemd[1]: Stopped target timers.target - Timer Units.
Sep  4 17:10:39.047245 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep  4 17:10:39.047388 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep  4 17:10:39.049820 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep  4 17:10:39.051762 systemd[1]: Stopped target basic.target - Basic System.
Sep  4 17:10:39.053492 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep  4 17:10:39.055317 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep  4 17:10:39.057139 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep  4 17:10:39.058994 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep  4 17:10:39.060728 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep  4 17:10:39.062588 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep  4 17:10:39.064854 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep  4 17:10:39.065383 systemd-networkd[761]: eth0: Gained IPv6LL
Sep  4 17:10:39.066307 systemd[1]: Stopped target swap.target - Swaps.
Sep  4 17:10:39.067995 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep  4 17:10:39.068110 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep  4 17:10:39.070166 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep  4 17:10:39.072032 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep  4 17:10:39.073959 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep  4 17:10:39.077378 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep  4 17:10:39.078547 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep  4 17:10:39.078672 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep  4 17:10:39.081486 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep  4 17:10:39.081602 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep  4 17:10:39.083543 systemd[1]: Stopped target paths.target - Path Units.
Sep  4 17:10:39.085117 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep  4 17:10:39.089410 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep  4 17:10:39.090655 systemd[1]: Stopped target slices.target - Slice Units.
Sep  4 17:10:39.092728 systemd[1]: Stopped target sockets.target - Socket Units.
Sep  4 17:10:39.094245 systemd[1]: iscsid.socket: Deactivated successfully.
Sep  4 17:10:39.094345 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep  4 17:10:39.095829 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep  4 17:10:39.095907 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep  4 17:10:39.097362 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep  4 17:10:39.097483 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep  4 17:10:39.099213 systemd[1]: ignition-files.service: Deactivated successfully.
Sep  4 17:10:39.099307 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep  4 17:10:39.111548 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep  4 17:10:39.112412 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep  4 17:10:39.112552 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep  4 17:10:39.115580 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep  4 17:10:39.117293 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep  4 17:10:39.119329 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep  4 17:10:39.120855 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep  4 17:10:39.121045 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep  4 17:10:39.124004 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep  4 17:10:39.124184 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep  4 17:10:39.126532 systemd-networkd[761]: eth0: DHCPv6 lease lost
Sep  4 17:10:39.127924 ignition[1002]: INFO     : Ignition 2.18.0
Sep  4 17:10:39.127924 ignition[1002]: INFO     : Stage: umount
Sep  4 17:10:39.127924 ignition[1002]: INFO     : no configs at "/usr/lib/ignition/base.d"
Sep  4 17:10:39.127924 ignition[1002]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep  4 17:10:39.127924 ignition[1002]: INFO     : umount: umount passed
Sep  4 17:10:39.127924 ignition[1002]: INFO     : Ignition finished successfully
Sep  4 17:10:39.130034 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep  4 17:10:39.130145 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep  4 17:10:39.134480 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep  4 17:10:39.135029 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep  4 17:10:39.136348 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep  4 17:10:39.142213 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep  4 17:10:39.142295 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep  4 17:10:39.145703 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep  4 17:10:39.145795 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep  4 17:10:39.148390 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep  4 17:10:39.148427 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep  4 17:10:39.149410 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep  4 17:10:39.149455 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep  4 17:10:39.150689 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep  4 17:10:39.150735 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep  4 17:10:39.153064 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep  4 17:10:39.153107 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep  4 17:10:39.154772 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep  4 17:10:39.154821 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep  4 17:10:39.163425 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep  4 17:10:39.164798 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep  4 17:10:39.164856 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep  4 17:10:39.166721 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep  4 17:10:39.166767 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep  4 17:10:39.168443 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep  4 17:10:39.168500 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep  4 17:10:39.170117 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep  4 17:10:39.170159 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep  4 17:10:39.172103 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep  4 17:10:39.182065 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep  4 17:10:39.182235 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep  4 17:10:39.183927 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep  4 17:10:39.184025 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep  4 17:10:39.185960 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep  4 17:10:39.186028 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep  4 17:10:39.188452 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep  4 17:10:39.188500 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep  4 17:10:39.190450 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep  4 17:10:39.190520 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep  4 17:10:39.193209 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep  4 17:10:39.193254 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep  4 17:10:39.195769 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep  4 17:10:39.195818 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep  4 17:10:39.210483 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep  4 17:10:39.211450 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep  4 17:10:39.211513 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep  4 17:10:39.213552 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep  4 17:10:39.213596 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep  4 17:10:39.215734 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep  4 17:10:39.217343 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep  4 17:10:39.218431 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep  4 17:10:39.218518 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep  4 17:10:39.221070 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep  4 17:10:39.222181 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep  4 17:10:39.222241 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep  4 17:10:39.234504 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep  4 17:10:39.243457 systemd[1]: Switching root.
Sep  4 17:10:39.280273 systemd-journald[237]: Journal stopped
Sep  4 17:10:40.033661 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Sep  4 17:10:40.033727 kernel: SELinux:  policy capability network_peer_controls=1
Sep  4 17:10:40.033741 kernel: SELinux:  policy capability open_perms=1
Sep  4 17:10:40.033751 kernel: SELinux:  policy capability extended_socket_class=1
Sep  4 17:10:40.033760 kernel: SELinux:  policy capability always_check_network=0
Sep  4 17:10:40.033773 kernel: SELinux:  policy capability cgroup_seclabel=1
Sep  4 17:10:40.033783 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Sep  4 17:10:40.033797 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Sep  4 17:10:40.033808 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Sep  4 17:10:40.033817 kernel: audit: type=1403 audit(1725469839.428:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep  4 17:10:40.033828 systemd[1]: Successfully loaded SELinux policy in 31.784ms.
Sep  4 17:10:40.033848 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.172ms.
Sep  4 17:10:40.033860 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep  4 17:10:40.033871 systemd[1]: Detected virtualization kvm.
Sep  4 17:10:40.033884 systemd[1]: Detected architecture arm64.
Sep  4 17:10:40.033895 systemd[1]: Detected first boot.
Sep  4 17:10:40.033906 systemd[1]: Initializing machine ID from VM UUID.
Sep  4 17:10:40.033917 zram_generator::config[1046]: No configuration found.
Sep  4 17:10:40.033928 systemd[1]: Populated /etc with preset unit settings.
Sep  4 17:10:40.033939 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep  4 17:10:40.033949 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep  4 17:10:40.033959 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep  4 17:10:40.033972 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep  4 17:10:40.033984 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep  4 17:10:40.033994 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep  4 17:10:40.034005 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep  4 17:10:40.034015 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep  4 17:10:40.034026 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep  4 17:10:40.034037 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep  4 17:10:40.034049 systemd[1]: Created slice user.slice - User and Session Slice.
Sep  4 17:10:40.034059 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep  4 17:10:40.034071 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep  4 17:10:40.034083 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep  4 17:10:40.034093 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep  4 17:10:40.034109 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep  4 17:10:40.034130 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep  4 17:10:40.034140 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep  4 17:10:40.034150 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep  4 17:10:40.034161 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep  4 17:10:40.034171 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep  4 17:10:40.034190 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep  4 17:10:40.034201 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep  4 17:10:40.034211 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep  4 17:10:40.034222 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep  4 17:10:40.034233 systemd[1]: Reached target slices.target - Slice Units.
Sep  4 17:10:40.034243 systemd[1]: Reached target swap.target - Swaps.
Sep  4 17:10:40.034255 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep  4 17:10:40.034265 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep  4 17:10:40.034277 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep  4 17:10:40.034288 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep  4 17:10:40.034299 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep  4 17:10:40.034322 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep  4 17:10:40.034335 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep  4 17:10:40.034345 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep  4 17:10:40.034356 systemd[1]: Mounting media.mount - External Media Directory...
Sep  4 17:10:40.034366 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep  4 17:10:40.034377 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep  4 17:10:40.034389 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep  4 17:10:40.034400 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep  4 17:10:40.034410 systemd[1]: Reached target machines.target - Containers.
Sep  4 17:10:40.034421 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep  4 17:10:40.034431 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep  4 17:10:40.034441 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep  4 17:10:40.034451 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep  4 17:10:40.034468 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep  4 17:10:40.034481 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep  4 17:10:40.034491 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep  4 17:10:40.034501 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep  4 17:10:40.034511 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep  4 17:10:40.034522 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep  4 17:10:40.034532 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep  4 17:10:40.034544 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep  4 17:10:40.034554 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep  4 17:10:40.034564 kernel: fuse: init (API version 7.39)
Sep  4 17:10:40.034575 systemd[1]: Stopped systemd-fsck-usr.service.
Sep  4 17:10:40.034586 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep  4 17:10:40.034596 kernel: loop: module loaded
Sep  4 17:10:40.034606 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep  4 17:10:40.034615 kernel: ACPI: bus type drm_connector registered
Sep  4 17:10:40.034626 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep  4 17:10:40.034636 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep  4 17:10:40.034647 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep  4 17:10:40.034678 systemd-journald[1112]: Collecting audit messages is disabled.
Sep  4 17:10:40.034703 systemd-journald[1112]: Journal started
Sep  4 17:10:40.034723 systemd-journald[1112]: Runtime Journal (/run/log/journal/8886928be0f843bcabb1616f6057dda3) is 5.9M, max 47.3M, 41.4M free.
Sep  4 17:10:39.820481 systemd[1]: Queued start job for default target multi-user.target.
Sep  4 17:10:39.847896 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep  4 17:10:39.849846 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep  4 17:10:40.037888 systemd[1]: verity-setup.service: Deactivated successfully.
Sep  4 17:10:40.037946 systemd[1]: Stopped verity-setup.service.
Sep  4 17:10:40.041370 systemd[1]: Started systemd-journald.service - Journal Service.
Sep  4 17:10:40.042040 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep  4 17:10:40.043280 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep  4 17:10:40.044560 systemd[1]: Mounted media.mount - External Media Directory.
Sep  4 17:10:40.045714 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep  4 17:10:40.046932 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep  4 17:10:40.048228 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep  4 17:10:40.049557 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep  4 17:10:40.051852 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep  4 17:10:40.053447 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep  4 17:10:40.053600 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep  4 17:10:40.055112 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep  4 17:10:40.055255 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep  4 17:10:40.056756 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep  4 17:10:40.056906 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep  4 17:10:40.058543 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep  4 17:10:40.058681 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep  4 17:10:40.060137 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep  4 17:10:40.060277 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep  4 17:10:40.061797 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep  4 17:10:40.061935 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep  4 17:10:40.063404 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep  4 17:10:40.065016 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep  4 17:10:40.066658 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep  4 17:10:40.079533 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep  4 17:10:40.088439 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep  4 17:10:40.090721 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep  4 17:10:40.091886 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep  4 17:10:40.091933 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep  4 17:10:40.093933 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep  4 17:10:40.096183 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep  4 17:10:40.098490 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep  4 17:10:40.099633 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep  4 17:10:40.101249 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep  4 17:10:40.103351 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep  4 17:10:40.104550 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep  4 17:10:40.106520 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep  4 17:10:40.107679 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep  4 17:10:40.110551 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep  4 17:10:40.116466 systemd-journald[1112]: Time spent on flushing to /var/log/journal/8886928be0f843bcabb1616f6057dda3 is 16.290ms for 852 entries.
Sep  4 17:10:40.116466 systemd-journald[1112]: System Journal (/var/log/journal/8886928be0f843bcabb1616f6057dda3) is 8.0M, max 195.6M, 187.6M free.
Sep  4 17:10:40.138234 systemd-journald[1112]: Received client request to flush runtime journal.
Sep  4 17:10:40.117550 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep  4 17:10:40.120619 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep  4 17:10:40.123257 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep  4 17:10:40.125962 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep  4 17:10:40.127405 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep  4 17:10:40.130436 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep  4 17:10:40.132073 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep  4 17:10:40.136880 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep  4 17:10:40.145503 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep  4 17:10:40.154053 kernel: loop0: detected capacity change from 0 to 113672
Sep  4 17:10:40.154242 kernel: block loop0: the capability attribute has been deprecated.
Sep  4 17:10:40.151564 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep  4 17:10:40.153295 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep  4 17:10:40.164871 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep  4 17:10:40.171188 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep  4 17:10:40.183516 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep  4 17:10:40.183785 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep  4 17:10:40.187742 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep  4 17:10:40.189441 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep  4 17:10:40.201500 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep  4 17:10:40.209775 kernel: loop1: detected capacity change from 0 to 193208
Sep  4 17:10:40.233121 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Sep  4 17:10:40.233140 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Sep  4 17:10:40.237563 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep  4 17:10:40.255599 kernel: loop2: detected capacity change from 0 to 59688
Sep  4 17:10:40.297338 kernel: loop3: detected capacity change from 0 to 113672
Sep  4 17:10:40.307329 kernel: loop4: detected capacity change from 0 to 193208
Sep  4 17:10:40.320361 kernel: loop5: detected capacity change from 0 to 59688
Sep  4 17:10:40.326264 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep  4 17:10:40.326708 (sd-merge)[1181]: Merged extensions into '/usr'.
Sep  4 17:10:40.330635 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)...
Sep  4 17:10:40.330750 systemd[1]: Reloading...
Sep  4 17:10:40.378410 zram_generator::config[1205]: No configuration found.
Sep  4 17:10:40.487784 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep  4 17:10:40.499115 ldconfig[1151]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep  4 17:10:40.527578 systemd[1]: Reloading finished in 196 ms.
Sep  4 17:10:40.559786 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep  4 17:10:40.561194 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep  4 17:10:40.576532 systemd[1]: Starting ensure-sysext.service...
Sep  4 17:10:40.578488 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep  4 17:10:40.592007 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)...
Sep  4 17:10:40.592024 systemd[1]: Reloading...
Sep  4 17:10:40.600991 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep  4 17:10:40.601594 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep  4 17:10:40.602339 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep  4 17:10:40.602666 systemd-tmpfiles[1243]: ACLs are not supported, ignoring.
Sep  4 17:10:40.602819 systemd-tmpfiles[1243]: ACLs are not supported, ignoring.
Sep  4 17:10:40.604874 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot.
Sep  4 17:10:40.604977 systemd-tmpfiles[1243]: Skipping /boot
Sep  4 17:10:40.611982 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot.
Sep  4 17:10:40.612113 systemd-tmpfiles[1243]: Skipping /boot
Sep  4 17:10:40.635349 zram_generator::config[1269]: No configuration found.
Sep  4 17:10:40.720857 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep  4 17:10:40.759175 systemd[1]: Reloading finished in 166 ms.
Sep  4 17:10:40.776471 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep  4 17:10:40.797813 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep  4 17:10:40.804198 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep  4 17:10:40.807093 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep  4 17:10:40.809548 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep  4 17:10:40.814602 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep  4 17:10:40.821704 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep  4 17:10:40.826570 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep  4 17:10:40.832797 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep  4 17:10:40.835384 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep  4 17:10:40.837808 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep  4 17:10:40.841076 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep  4 17:10:40.844324 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep  4 17:10:40.845558 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep  4 17:10:40.847415 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep  4 17:10:40.847592 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep  4 17:10:40.852788 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep  4 17:10:40.852952 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep  4 17:10:40.860275 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep  4 17:10:40.862364 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep  4 17:10:40.862590 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep  4 17:10:40.864671 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep  4 17:10:40.864818 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep  4 17:10:40.865348 systemd-udevd[1310]: Using default interface naming scheme 'v255'.
Sep  4 17:10:40.872949 systemd[1]: Finished ensure-sysext.service.
Sep  4 17:10:40.883728 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep  4 17:10:40.893799 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep  4 17:10:40.895092 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep  4 17:10:40.895159 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep  4 17:10:40.895215 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep  4 17:10:40.902806 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep  4 17:10:40.906522 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep  4 17:10:40.909705 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep  4 17:10:40.913988 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep  4 17:10:40.917787 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep  4 17:10:40.946369 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep  4 17:10:40.948944 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep  4 17:10:40.949103 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep  4 17:10:40.955604 augenrules[1358]: No rules
Sep  4 17:10:40.957993 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep  4 17:10:40.959916 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep  4 17:10:40.968383 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1337)
Sep  4 17:10:40.973603 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep  4 17:10:40.974750 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep  4 17:10:40.996346 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1349)
Sep  4 17:10:41.013433 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep  4 17:10:41.049956 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep  4 17:10:41.055976 systemd-networkd[1373]: lo: Link UP
Sep  4 17:10:41.055991 systemd-networkd[1373]: lo: Gained carrier
Sep  4 17:10:41.056787 systemd-networkd[1373]: Enumeration completed
Sep  4 17:10:41.064525 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep  4 17:10:41.065270 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep  4 17:10:41.065285 systemd-networkd[1373]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep  4 17:10:41.065884 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep  4 17:10:41.066153 systemd-networkd[1373]: eth0: Link UP
Sep  4 17:10:41.066157 systemd-networkd[1373]: eth0: Gained carrier
Sep  4 17:10:41.066173 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep  4 17:10:41.067384 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep  4 17:10:41.068954 systemd[1]: Reached target time-set.target - System Time Set.
Sep  4 17:10:41.070415 systemd-resolved[1308]: Positive Trust Anchors:
Sep  4 17:10:41.070437 systemd-resolved[1308]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep  4 17:10:41.070473 systemd-resolved[1308]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep  4 17:10:41.071554 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep  4 17:10:41.075011 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep  4 17:10:41.079546 systemd-resolved[1308]: Defaulting to hostname 'linux'.
Sep  4 17:10:41.081662 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep  4 17:10:41.082992 systemd[1]: Reached target network.target - Network.
Sep  4 17:10:41.083989 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep  4 17:10:41.085418 systemd-networkd[1373]: eth0: DHCPv4 address 10.0.0.33/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep  4 17:10:41.086200 systemd-timesyncd[1334]: Network configuration changed, trying to establish connection.
Sep  4 17:10:41.089533 systemd-timesyncd[1334]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep  4 17:10:41.089598 systemd-timesyncd[1334]: Initial clock synchronization to Wed 2024-09-04 17:10:40.868514 UTC.
Sep  4 17:10:41.096440 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep  4 17:10:41.144950 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep  4 17:10:41.152891 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep  4 17:10:41.157039 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep  4 17:10:41.184010 lvm[1390]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep  4 17:10:41.207965 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep  4 17:10:41.216951 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep  4 17:10:41.219138 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep  4 17:10:41.220512 systemd[1]: Reached target sysinit.target - System Initialization.
Sep  4 17:10:41.221774 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep  4 17:10:41.223193 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep  4 17:10:41.224875 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep  4 17:10:41.226170 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep  4 17:10:41.227496 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep  4 17:10:41.228899 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep  4 17:10:41.228984 systemd[1]: Reached target paths.target - Path Units.
Sep  4 17:10:41.230034 systemd[1]: Reached target timers.target - Timer Units.
Sep  4 17:10:41.234973 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep  4 17:10:41.237875 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep  4 17:10:41.255794 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep  4 17:10:41.259801 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep  4 17:10:41.261864 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep  4 17:10:41.263235 systemd[1]: Reached target sockets.target - Socket Units.
Sep  4 17:10:41.264335 systemd[1]: Reached target basic.target - Basic System.
Sep  4 17:10:41.265385 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep  4 17:10:41.265418 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep  4 17:10:41.271160 lvm[1398]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep  4 17:10:41.272828 systemd[1]: Starting containerd.service - containerd container runtime...
Sep  4 17:10:41.275423 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep  4 17:10:41.277718 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep  4 17:10:41.282622 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep  4 17:10:41.286582 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep  4 17:10:41.288036 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep  4 17:10:41.291861 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep  4 17:10:41.295464 jq[1401]: false
Sep  4 17:10:41.297161 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep  4 17:10:41.301214 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep  4 17:10:41.307994 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep  4 17:10:41.310737 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep  4 17:10:41.311290 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep  4 17:10:41.312819 systemd[1]: Starting update-engine.service - Update Engine...
Sep  4 17:10:41.313481 extend-filesystems[1402]: Found loop3
Sep  4 17:10:41.319370 extend-filesystems[1402]: Found loop4
Sep  4 17:10:41.319370 extend-filesystems[1402]: Found loop5
Sep  4 17:10:41.319370 extend-filesystems[1402]: Found vda
Sep  4 17:10:41.319370 extend-filesystems[1402]: Found vda1
Sep  4 17:10:41.319370 extend-filesystems[1402]: Found vda2
Sep  4 17:10:41.319370 extend-filesystems[1402]: Found vda3
Sep  4 17:10:41.319370 extend-filesystems[1402]: Found usr
Sep  4 17:10:41.319370 extend-filesystems[1402]: Found vda4
Sep  4 17:10:41.319370 extend-filesystems[1402]: Found vda6
Sep  4 17:10:41.319370 extend-filesystems[1402]: Found vda7
Sep  4 17:10:41.319370 extend-filesystems[1402]: Found vda9
Sep  4 17:10:41.319370 extend-filesystems[1402]: Checking size of /dev/vda9
Sep  4 17:10:41.315729 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep  4 17:10:41.361621 extend-filesystems[1402]: Resized partition /dev/vda9
Sep  4 17:10:41.321998 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep  4 17:10:41.329679 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep  4 17:10:41.330032 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep  4 17:10:41.366157 jq[1415]: true
Sep  4 17:10:41.333511 systemd[1]: motdgen.service: Deactivated successfully.
Sep  4 17:10:41.367747 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1350)
Sep  4 17:10:41.333728 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep  4 17:10:41.368867 jq[1423]: true
Sep  4 17:10:41.336150 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep  4 17:10:41.336357 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep  4 17:10:41.384353 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep  4 17:10:41.384465 extend-filesystems[1435]: resize2fs 1.47.0 (5-Feb-2023)
Sep  4 17:10:41.383973 dbus-daemon[1400]: [system] SELinux support is enabled
Sep  4 17:10:41.398197 tar[1421]: linux-arm64/helm
Sep  4 17:10:41.385484 (ntainerd)[1429]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep  4 17:10:41.388393 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep  4 17:10:41.397233 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep  4 17:10:41.397262 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep  4 17:10:41.399341 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep  4 17:10:41.399384 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep  4 17:10:41.402530 update_engine[1414]: I0904 17:10:41.402078  1414 main.cc:92] Flatcar Update Engine starting
Sep  4 17:10:41.406706 update_engine[1414]: I0904 17:10:41.405772  1414 update_check_scheduler.cc:74] Next update check in 9m13s
Sep  4 17:10:41.406042 systemd[1]: Started update-engine.service - Update Engine.
Sep  4 17:10:41.406460 systemd-logind[1410]: Watching system buttons on /dev/input/event0 (Power Button)
Sep  4 17:10:41.407565 systemd-logind[1410]: New seat seat0.
Sep  4 17:10:41.413640 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep  4 17:10:41.414947 systemd[1]: Started systemd-logind.service - User Login Management.
Sep  4 17:10:41.432336 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep  4 17:10:41.462926 locksmithd[1450]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep  4 17:10:41.466016 extend-filesystems[1435]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep  4 17:10:41.466016 extend-filesystems[1435]: old_desc_blocks = 1, new_desc_blocks = 1
Sep  4 17:10:41.466016 extend-filesystems[1435]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep  4 17:10:41.470200 extend-filesystems[1402]: Resized filesystem in /dev/vda9
Sep  4 17:10:41.468418 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep  4 17:10:41.470051 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep  4 17:10:41.483943 bash[1454]: Updated "/home/core/.ssh/authorized_keys"
Sep  4 17:10:41.485839 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep  4 17:10:41.487772 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep  4 17:10:41.615908 containerd[1429]: time="2024-09-04T17:10:41.615793400Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Sep  4 17:10:41.640735 containerd[1429]: time="2024-09-04T17:10:41.640651040Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep  4 17:10:41.640832 containerd[1429]: time="2024-09-04T17:10:41.640793960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep  4 17:10:41.642192 containerd[1429]: time="2024-09-04T17:10:41.642145400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep  4 17:10:41.642192 containerd[1429]: time="2024-09-04T17:10:41.642183480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep  4 17:10:41.642545 containerd[1429]: time="2024-09-04T17:10:41.642512440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep  4 17:10:41.642545 containerd[1429]: time="2024-09-04T17:10:41.642536720Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep  4 17:10:41.642634 containerd[1429]: time="2024-09-04T17:10:41.642617360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep  4 17:10:41.642684 containerd[1429]: time="2024-09-04T17:10:41.642669000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep  4 17:10:41.642708 containerd[1429]: time="2024-09-04T17:10:41.642683920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep  4 17:10:41.642779 containerd[1429]: time="2024-09-04T17:10:41.642758640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep  4 17:10:41.642964 containerd[1429]: time="2024-09-04T17:10:41.642945400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep  4 17:10:41.642997 containerd[1429]: time="2024-09-04T17:10:41.642968680Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep  4 17:10:41.642997 containerd[1429]: time="2024-09-04T17:10:41.642979120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep  4 17:10:41.643097 containerd[1429]: time="2024-09-04T17:10:41.643080320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep  4 17:10:41.643097 containerd[1429]: time="2024-09-04T17:10:41.643097160Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep  4 17:10:41.643191 containerd[1429]: time="2024-09-04T17:10:41.643149800Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep  4 17:10:41.643191 containerd[1429]: time="2024-09-04T17:10:41.643166080Z" level=info msg="metadata content store policy set" policy=shared
Sep  4 17:10:41.648956 containerd[1429]: time="2024-09-04T17:10:41.648917280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep  4 17:10:41.648956 containerd[1429]: time="2024-09-04T17:10:41.648954080Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep  4 17:10:41.649057 containerd[1429]: time="2024-09-04T17:10:41.648967160Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep  4 17:10:41.649057 containerd[1429]: time="2024-09-04T17:10:41.648998120Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep  4 17:10:41.649057 containerd[1429]: time="2024-09-04T17:10:41.649012440Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep  4 17:10:41.649057 containerd[1429]: time="2024-09-04T17:10:41.649022800Z" level=info msg="NRI interface is disabled by configuration."
Sep  4 17:10:41.649057 containerd[1429]: time="2024-09-04T17:10:41.649035600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep  4 17:10:41.649231 containerd[1429]: time="2024-09-04T17:10:41.649195560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep  4 17:10:41.649231 containerd[1429]: time="2024-09-04T17:10:41.649222680Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep  4 17:10:41.649277 containerd[1429]: time="2024-09-04T17:10:41.649239200Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep  4 17:10:41.649277 containerd[1429]: time="2024-09-04T17:10:41.649254160Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep  4 17:10:41.649277 containerd[1429]: time="2024-09-04T17:10:41.649267560Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep  4 17:10:41.649349 containerd[1429]: time="2024-09-04T17:10:41.649284080Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep  4 17:10:41.649349 containerd[1429]: time="2024-09-04T17:10:41.649300000Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep  4 17:10:41.649349 containerd[1429]: time="2024-09-04T17:10:41.649331560Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep  4 17:10:41.649349 containerd[1429]: time="2024-09-04T17:10:41.649347280Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep  4 17:10:41.649498 containerd[1429]: time="2024-09-04T17:10:41.649361880Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep  4 17:10:41.649498 containerd[1429]: time="2024-09-04T17:10:41.649375280Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep  4 17:10:41.649498 containerd[1429]: time="2024-09-04T17:10:41.649387800Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep  4 17:10:41.649549 containerd[1429]: time="2024-09-04T17:10:41.649504080Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep  4 17:10:41.649752 containerd[1429]: time="2024-09-04T17:10:41.649732840Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep  4 17:10:41.649788 containerd[1429]: time="2024-09-04T17:10:41.649764400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep  4 17:10:41.649788 containerd[1429]: time="2024-09-04T17:10:41.649779240Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep  4 17:10:41.649830 containerd[1429]: time="2024-09-04T17:10:41.649803800Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep  4 17:10:41.649932 containerd[1429]: time="2024-09-04T17:10:41.649919880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep  4 17:10:41.649955 containerd[1429]: time="2024-09-04T17:10:41.649936400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep  4 17:10:41.649955 containerd[1429]: time="2024-09-04T17:10:41.649949000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep  4 17:10:41.649996 containerd[1429]: time="2024-09-04T17:10:41.649960920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep  4 17:10:41.649996 containerd[1429]: time="2024-09-04T17:10:41.649974120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep  4 17:10:41.649996 containerd[1429]: time="2024-09-04T17:10:41.649987800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep  4 17:10:41.650048 containerd[1429]: time="2024-09-04T17:10:41.649999920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep  4 17:10:41.650048 containerd[1429]: time="2024-09-04T17:10:41.650011280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep  4 17:10:41.650048 containerd[1429]: time="2024-09-04T17:10:41.650023800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep  4 17:10:41.650166 containerd[1429]: time="2024-09-04T17:10:41.650148040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep  4 17:10:41.650212 containerd[1429]: time="2024-09-04T17:10:41.650177160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep  4 17:10:41.650212 containerd[1429]: time="2024-09-04T17:10:41.650197520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep  4 17:10:41.650247 containerd[1429]: time="2024-09-04T17:10:41.650211040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep  4 17:10:41.650247 containerd[1429]: time="2024-09-04T17:10:41.650223720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep  4 17:10:41.650247 containerd[1429]: time="2024-09-04T17:10:41.650237320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep  4 17:10:41.650303 containerd[1429]: time="2024-09-04T17:10:41.650251680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep  4 17:10:41.650303 containerd[1429]: time="2024-09-04T17:10:41.650264080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep  4 17:10:41.651357 containerd[1429]: time="2024-09-04T17:10:41.650609280Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: 
TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep  4 17:10:41.651357 containerd[1429]: time="2024-09-04T17:10:41.650674640Z" level=info msg="Connect containerd service"
Sep  4 17:10:41.651357 containerd[1429]: time="2024-09-04T17:10:41.650710960Z" level=info msg="using legacy CRI server"
Sep  4 17:10:41.651357 containerd[1429]: time="2024-09-04T17:10:41.650718720Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep  4 17:10:41.651357 containerd[1429]: time="2024-09-04T17:10:41.650870760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep  4 17:10:41.651765 containerd[1429]: time="2024-09-04T17:10:41.651724520Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep  4 17:10:41.651803 containerd[1429]: time="2024-09-04T17:10:41.651788800Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep  4 17:10:41.651823 containerd[1429]: time="2024-09-04T17:10:41.651807440Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep  4 17:10:41.651823 containerd[1429]: time="2024-09-04T17:10:41.651818600Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep  4 17:10:41.652025 containerd[1429]: time="2024-09-04T17:10:41.651926480Z" level=info msg="Start subscribing containerd event"
Sep  4 17:10:41.652058 containerd[1429]: time="2024-09-04T17:10:41.652041120Z" level=info msg="Start recovering state"
Sep  4 17:10:41.652116 containerd[1429]: time="2024-09-04T17:10:41.652103880Z" level=info msg="Start event monitor"
Sep  4 17:10:41.652142 containerd[1429]: time="2024-09-04T17:10:41.652117600Z" level=info msg="Start snapshots syncer"
Sep  4 17:10:41.652142 containerd[1429]: time="2024-09-04T17:10:41.652127000Z" level=info msg="Start cni network conf syncer for default"
Sep  4 17:10:41.652142 containerd[1429]: time="2024-09-04T17:10:41.652134360Z" level=info msg="Start streaming server"
Sep  4 17:10:41.652664 containerd[1429]: time="2024-09-04T17:10:41.652589440Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep  4 17:10:41.652854 containerd[1429]: time="2024-09-04T17:10:41.652833680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep  4 17:10:41.652955 containerd[1429]: time="2024-09-04T17:10:41.652882240Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep  4 17:10:41.655378 systemd[1]: Started containerd.service - containerd container runtime.
Sep  4 17:10:41.657082 containerd[1429]: time="2024-09-04T17:10:41.657036200Z" level=info msg="containerd successfully booted in 0.042385s"
Sep  4 17:10:41.780724 tar[1421]: linux-arm64/LICENSE
Sep  4 17:10:41.780724 tar[1421]: linux-arm64/README.md
Sep  4 17:10:41.790629 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep  4 17:10:41.797910 sshd_keygen[1433]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep  4 17:10:41.820484 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep  4 17:10:41.838757 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep  4 17:10:41.846076 systemd[1]: issuegen.service: Deactivated successfully.
Sep  4 17:10:41.846288 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep  4 17:10:41.849335 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep  4 17:10:41.863039 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep  4 17:10:41.866207 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep  4 17:10:41.868671 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep  4 17:10:41.870016 systemd[1]: Reached target getty.target - Login Prompts.
Sep  4 17:10:42.392530 systemd-networkd[1373]: eth0: Gained IPv6LL
Sep  4 17:10:42.398081 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep  4 17:10:42.399812 systemd[1]: Reached target network-online.target - Network is Online.
Sep  4 17:10:42.419656 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep  4 17:10:42.422402 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep  4 17:10:42.424647 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep  4 17:10:42.443528 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep  4 17:10:42.444386 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep  4 17:10:42.446976 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep  4 17:10:42.451989 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep  4 17:10:42.919503 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep  4 17:10:42.920979 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep  4 17:10:42.922214 systemd[1]: Startup finished in 566ms (kernel) + 4.717s (initrd) + 3.533s (userspace) = 8.817s.
Sep  4 17:10:42.924705 (kubelet)[1515]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep  4 17:10:43.456535 kubelet[1515]: E0904 17:10:43.456445    1515 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep  4 17:10:43.459200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep  4 17:10:43.459377 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep  4 17:10:47.805630 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep  4 17:10:47.806780 systemd[1]: Started sshd@0-10.0.0.33:22-10.0.0.1:36250.service - OpenSSH per-connection server daemon (10.0.0.1:36250).
Sep  4 17:10:47.884750 sshd[1529]: Accepted publickey for core from 10.0.0.1 port 36250 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep  4 17:10:47.886422 sshd[1529]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:10:47.893671 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep  4 17:10:47.910633 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep  4 17:10:47.914524 systemd-logind[1410]: New session 1 of user core.
Sep  4 17:10:47.924161 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep  4 17:10:47.927089 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep  4 17:10:47.954555 (systemd)[1533]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:10:48.038014 systemd[1533]: Queued start job for default target default.target.
Sep  4 17:10:48.060673 systemd[1533]: Created slice app.slice - User Application Slice.
Sep  4 17:10:48.060701 systemd[1533]: Reached target paths.target - Paths.
Sep  4 17:10:48.060713 systemd[1533]: Reached target timers.target - Timers.
Sep  4 17:10:48.061892 systemd[1533]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep  4 17:10:48.071338 systemd[1533]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep  4 17:10:48.071395 systemd[1533]: Reached target sockets.target - Sockets.
Sep  4 17:10:48.071406 systemd[1533]: Reached target basic.target - Basic System.
Sep  4 17:10:48.071438 systemd[1533]: Reached target default.target - Main User Target.
Sep  4 17:10:48.071463 systemd[1533]: Startup finished in 110ms.
Sep  4 17:10:48.071742 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep  4 17:10:48.073091 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep  4 17:10:48.134655 systemd[1]: Started sshd@1-10.0.0.33:22-10.0.0.1:36266.service - OpenSSH per-connection server daemon (10.0.0.1:36266).
Sep  4 17:10:48.174043 sshd[1544]: Accepted publickey for core from 10.0.0.1 port 36266 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep  4 17:10:48.175228 sshd[1544]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:10:48.179124 systemd-logind[1410]: New session 2 of user core.
Sep  4 17:10:48.186524 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep  4 17:10:48.239158 sshd[1544]: pam_unix(sshd:session): session closed for user core
Sep  4 17:10:48.248587 systemd[1]: sshd@1-10.0.0.33:22-10.0.0.1:36266.service: Deactivated successfully.
Sep  4 17:10:48.249883 systemd[1]: session-2.scope: Deactivated successfully.
Sep  4 17:10:48.252421 systemd-logind[1410]: Session 2 logged out. Waiting for processes to exit.
Sep  4 17:10:48.253563 systemd[1]: Started sshd@2-10.0.0.33:22-10.0.0.1:36282.service - OpenSSH per-connection server daemon (10.0.0.1:36282).
Sep  4 17:10:48.254279 systemd-logind[1410]: Removed session 2.
Sep  4 17:10:48.293331 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 36282 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep  4 17:10:48.294515 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:10:48.298805 systemd-logind[1410]: New session 3 of user core.
Sep  4 17:10:48.316505 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep  4 17:10:48.367615 sshd[1551]: pam_unix(sshd:session): session closed for user core
Sep  4 17:10:48.381836 systemd[1]: sshd@2-10.0.0.33:22-10.0.0.1:36282.service: Deactivated successfully.
Sep  4 17:10:48.384678 systemd[1]: session-3.scope: Deactivated successfully.
Sep  4 17:10:48.386028 systemd-logind[1410]: Session 3 logged out. Waiting for processes to exit.
Sep  4 17:10:48.387400 systemd[1]: Started sshd@3-10.0.0.33:22-10.0.0.1:36284.service - OpenSSH per-connection server daemon (10.0.0.1:36284).
Sep  4 17:10:48.388186 systemd-logind[1410]: Removed session 3.
Sep  4 17:10:48.431186 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 36284 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep  4 17:10:48.432591 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:10:48.436597 systemd-logind[1410]: New session 4 of user core.
Sep  4 17:10:48.446464 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep  4 17:10:48.498142 sshd[1558]: pam_unix(sshd:session): session closed for user core
Sep  4 17:10:48.506884 systemd[1]: sshd@3-10.0.0.33:22-10.0.0.1:36284.service: Deactivated successfully.
Sep  4 17:10:48.510017 systemd[1]: session-4.scope: Deactivated successfully.
Sep  4 17:10:48.512358 systemd-logind[1410]: Session 4 logged out. Waiting for processes to exit.
Sep  4 17:10:48.520939 systemd[1]: Started sshd@4-10.0.0.33:22-10.0.0.1:36290.service - OpenSSH per-connection server daemon (10.0.0.1:36290).
Sep  4 17:10:48.521958 systemd-logind[1410]: Removed session 4.
Sep  4 17:10:48.558968 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 36290 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep  4 17:10:48.560527 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:10:48.566110 systemd-logind[1410]: New session 5 of user core.
Sep  4 17:10:48.578492 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep  4 17:10:48.647488 sudo[1568]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep  4 17:10:48.651856 sudo[1568]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep  4 17:10:48.671441 sudo[1568]: pam_unix(sudo:session): session closed for user root
Sep  4 17:10:48.674610 sshd[1565]: pam_unix(sshd:session): session closed for user core
Sep  4 17:10:48.683118 systemd[1]: sshd@4-10.0.0.33:22-10.0.0.1:36290.service: Deactivated successfully.
Sep  4 17:10:48.685715 systemd[1]: session-5.scope: Deactivated successfully.
Sep  4 17:10:48.690785 systemd-logind[1410]: Session 5 logged out. Waiting for processes to exit.
Sep  4 17:10:48.691676 systemd[1]: Started sshd@5-10.0.0.33:22-10.0.0.1:36296.service - OpenSSH per-connection server daemon (10.0.0.1:36296).
Sep  4 17:10:48.694578 systemd-logind[1410]: Removed session 5.
Sep  4 17:10:48.736055 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 36296 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep  4 17:10:48.736606 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:10:48.741643 systemd-logind[1410]: New session 6 of user core.
Sep  4 17:10:48.750490 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep  4 17:10:48.810016 sudo[1577]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep  4 17:10:48.810770 sudo[1577]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep  4 17:10:48.814680 sudo[1577]: pam_unix(sudo:session): session closed for user root
Sep  4 17:10:48.819190 sudo[1576]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep  4 17:10:48.819436 sudo[1576]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep  4 17:10:48.841628 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep  4 17:10:48.843390 auditctl[1580]: No rules
Sep  4 17:10:48.844245 systemd[1]: audit-rules.service: Deactivated successfully.
Sep  4 17:10:48.844484 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep  4 17:10:48.847769 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep  4 17:10:48.879775 augenrules[1598]: No rules
Sep  4 17:10:48.883410 sudo[1576]: pam_unix(sudo:session): session closed for user root
Sep  4 17:10:48.880974 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep  4 17:10:48.885561 sshd[1573]: pam_unix(sshd:session): session closed for user core
Sep  4 17:10:48.895084 systemd[1]: sshd@5-10.0.0.33:22-10.0.0.1:36296.service: Deactivated successfully.
Sep  4 17:10:48.898504 systemd[1]: session-6.scope: Deactivated successfully.
Sep  4 17:10:48.902843 systemd-logind[1410]: Session 6 logged out. Waiting for processes to exit.
Sep  4 17:10:48.906926 systemd-logind[1410]: Removed session 6.
Sep  4 17:10:48.924097 systemd[1]: Started sshd@6-10.0.0.33:22-10.0.0.1:36300.service - OpenSSH per-connection server daemon (10.0.0.1:36300).
Sep  4 17:10:48.967187 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 36300 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep  4 17:10:48.968594 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:10:48.974722 systemd-logind[1410]: New session 7 of user core.
Sep  4 17:10:48.987585 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep  4 17:10:49.042613 sudo[1609]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep  4 17:10:49.042861 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep  4 17:10:49.174742 (dockerd)[1620]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep  4 17:10:49.175259 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep  4 17:10:49.425378 dockerd[1620]: time="2024-09-04T17:10:49.424939133Z" level=info msg="Starting up"
Sep  4 17:10:50.174008 dockerd[1620]: time="2024-09-04T17:10:50.173957816Z" level=info msg="Loading containers: start."
Sep  4 17:10:50.297330 kernel: Initializing XFRM netlink socket
Sep  4 17:10:50.365605 systemd-networkd[1373]: docker0: Link UP
Sep  4 17:10:50.411108 dockerd[1620]: time="2024-09-04T17:10:50.410643184Z" level=info msg="Loading containers: done."
Sep  4 17:10:50.463246 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3595753743-merged.mount: Deactivated successfully.
Sep  4 17:10:50.467337 dockerd[1620]: time="2024-09-04T17:10:50.467073272Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep  4 17:10:50.467337 dockerd[1620]: time="2024-09-04T17:10:50.467260043Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Sep  4 17:10:50.467678 dockerd[1620]: time="2024-09-04T17:10:50.467400815Z" level=info msg="Daemon has completed initialization"
Sep  4 17:10:50.494344 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep  4 17:10:50.494525 dockerd[1620]: time="2024-09-04T17:10:50.494427423Z" level=info msg="API listen on /run/docker.sock"
Sep  4 17:10:51.102676 containerd[1429]: time="2024-09-04T17:10:51.102619020Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\""
Sep  4 17:10:51.839487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount586279943.mount: Deactivated successfully.
Sep  4 17:10:53.550185 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep  4 17:10:53.560579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep  4 17:10:53.656258 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep  4 17:10:53.661016 (kubelet)[1826]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep  4 17:10:53.710576 kubelet[1826]: E0904 17:10:53.710513    1826 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep  4 17:10:53.714264 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep  4 17:10:53.714479 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep  4 17:10:53.820676 containerd[1429]: time="2024-09-04T17:10:53.820547069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.13\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:10:53.821729 containerd[1429]: time="2024-09-04T17:10:53.821679982Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.13: active requests=0, bytes read=31599024"
Sep  4 17:10:53.822505 containerd[1429]: time="2024-09-04T17:10:53.822468299Z" level=info msg="ImageCreate event name:\"sha256:a339bb1c702d4062f524851aa528a3feed19ee9f717d14911cc30771e13491ea\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:10:53.825991 containerd[1429]: time="2024-09-04T17:10:53.825433731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:10:53.826711 containerd[1429]: time="2024-09-04T17:10:53.826677535Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.13\" with image id \"sha256:a339bb1c702d4062f524851aa528a3feed19ee9f717d14911cc30771e13491ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\", size \"31595822\" in 2.724006678s"
Sep  4 17:10:53.826923 containerd[1429]: time="2024-09-04T17:10:53.826811438Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\" returns image reference \"sha256:a339bb1c702d4062f524851aa528a3feed19ee9f717d14911cc30771e13491ea\""
Sep  4 17:10:53.847884 containerd[1429]: time="2024-09-04T17:10:53.847612225Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\""
Sep  4 17:10:55.729570 containerd[1429]: time="2024-09-04T17:10:55.729118822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.13\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:10:55.729951 containerd[1429]: time="2024-09-04T17:10:55.729913456Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.13: active requests=0, bytes read=29019498"
Sep  4 17:10:55.730434 containerd[1429]: time="2024-09-04T17:10:55.730404176Z" level=info msg="ImageCreate event name:\"sha256:1e81172b17d2d45f9e0ff1ac37a042d34a1be80722b8c8bcab67d9250065fa6d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:10:55.733360 containerd[1429]: time="2024-09-04T17:10:55.733325212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:10:55.734605 containerd[1429]: time="2024-09-04T17:10:55.734572194Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.13\" with image id \"sha256:1e81172b17d2d45f9e0ff1ac37a042d34a1be80722b8c8bcab67d9250065fa6d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\", size \"30506763\" in 1.886918734s"
Sep  4 17:10:55.734605 containerd[1429]: time="2024-09-04T17:10:55.734608537Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\" returns image reference \"sha256:1e81172b17d2d45f9e0ff1ac37a042d34a1be80722b8c8bcab67d9250065fa6d\""
Sep  4 17:10:55.753956 containerd[1429]: time="2024-09-04T17:10:55.753915962Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\""
Sep  4 17:10:56.895981 containerd[1429]: time="2024-09-04T17:10:56.895933452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.13\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:10:56.898890 containerd[1429]: time="2024-09-04T17:10:56.898620283Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.13: active requests=0, bytes read=15533683"
Sep  4 17:10:56.899569 containerd[1429]: time="2024-09-04T17:10:56.899535450Z" level=info msg="ImageCreate event name:\"sha256:42bbd5a6799fefc25b4b3269d8ad07628893c29d7b26d8fab57f6785b976ec7a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:10:56.902429 containerd[1429]: time="2024-09-04T17:10:56.902398049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:10:56.904256 containerd[1429]: time="2024-09-04T17:10:56.904212649Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.13\" with image id \"sha256:42bbd5a6799fefc25b4b3269d8ad07628893c29d7b26d8fab57f6785b976ec7a\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\", size \"17020966\" in 1.150260458s"
Sep  4 17:10:56.904256 containerd[1429]: time="2024-09-04T17:10:56.904252837Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\" returns image reference \"sha256:42bbd5a6799fefc25b4b3269d8ad07628893c29d7b26d8fab57f6785b976ec7a\""
Sep  4 17:10:56.924741 containerd[1429]: time="2024-09-04T17:10:56.924706016Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\""
Sep  4 17:10:58.024364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2061605951.mount: Deactivated successfully.
Sep  4 17:10:58.700044 containerd[1429]: time="2024-09-04T17:10:58.699986068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.13\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:10:58.700619 containerd[1429]: time="2024-09-04T17:10:58.700581002Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.13: active requests=0, bytes read=24977932"
Sep  4 17:10:58.701284 containerd[1429]: time="2024-09-04T17:10:58.701241163Z" level=info msg="ImageCreate event name:\"sha256:28cc84306a40b12ede33c1df2d3219e0061b4d0e5309eb874034dd77e9154393\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:10:58.703290 containerd[1429]: time="2024-09-04T17:10:58.703257249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:10:58.703965 containerd[1429]: time="2024-09-04T17:10:58.703929690Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.13\" with image id \"sha256:28cc84306a40b12ede33c1df2d3219e0061b4d0e5309eb874034dd77e9154393\", repo tag \"registry.k8s.io/kube-proxy:v1.28.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\", size \"24976949\" in 1.779182159s"
Sep  4 17:10:58.704032 containerd[1429]: time="2024-09-04T17:10:58.703975022Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\" returns image reference \"sha256:28cc84306a40b12ede33c1df2d3219e0061b4d0e5309eb874034dd77e9154393\""
Sep  4 17:10:58.722488 containerd[1429]: time="2024-09-04T17:10:58.722455339Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Sep  4 17:10:59.385712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount57660062.mount: Deactivated successfully.
Sep  4 17:10:59.390877 containerd[1429]: time="2024-09-04T17:10:59.390528440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:10:59.391713 containerd[1429]: time="2024-09-04T17:10:59.391520681Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Sep  4 17:10:59.392409 containerd[1429]: time="2024-09-04T17:10:59.392383094Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:10:59.395644 containerd[1429]: time="2024-09-04T17:10:59.395609225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:10:59.396338 containerd[1429]: time="2024-09-04T17:10:59.396236032Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 673.743331ms"
Sep  4 17:10:59.396338 containerd[1429]: time="2024-09-04T17:10:59.396266465Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Sep  4 17:10:59.414571 containerd[1429]: time="2024-09-04T17:10:59.414537238Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Sep  4 17:10:59.989978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount360070666.mount: Deactivated successfully.
Sep  4 17:11:02.447187 containerd[1429]: time="2024-09-04T17:11:02.447073781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:11:02.461141 containerd[1429]: time="2024-09-04T17:11:02.461074495Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788"
Sep  4 17:11:02.474882 containerd[1429]: time="2024-09-04T17:11:02.474843333Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:11:02.530255 containerd[1429]: time="2024-09-04T17:11:02.530189407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:11:02.531046 containerd[1429]: time="2024-09-04T17:11:02.531003208Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.116428707s"
Sep  4 17:11:02.531046 containerd[1429]: time="2024-09-04T17:11:02.531039180Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Sep  4 17:11:02.549817 containerd[1429]: time="2024-09-04T17:11:02.549635455Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Sep  4 17:11:03.232124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1223861019.mount: Deactivated successfully.
Sep  4 17:11:03.621719 containerd[1429]: time="2024-09-04T17:11:03.621645538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:11:03.622860 containerd[1429]: time="2024-09-04T17:11:03.622554256Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558464"
Sep  4 17:11:03.623611 containerd[1429]: time="2024-09-04T17:11:03.623570155Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:11:03.627123 containerd[1429]: time="2024-09-04T17:11:03.627042060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:11:03.628124 containerd[1429]: time="2024-09-04T17:11:03.627764569Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 1.078082161s"
Sep  4 17:11:03.628124 containerd[1429]: time="2024-09-04T17:11:03.627815844Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\""
Sep  4 17:11:03.800210 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep  4 17:11:03.811521 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep  4 17:11:03.922738 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep  4 17:11:03.927561 (kubelet)[1981]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep  4 17:11:03.970735 kubelet[1981]: E0904 17:11:03.970692    1981 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep  4 17:11:03.973114 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep  4 17:11:03.973246 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep  4 17:11:08.174249 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep  4 17:11:08.194579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep  4 17:11:08.213297 systemd[1]: Reloading requested from client PID 2043 ('systemctl') (unit session-7.scope)...
Sep  4 17:11:08.213331 systemd[1]: Reloading...
Sep  4 17:11:08.282638 zram_generator::config[2080]: No configuration found.
Sep  4 17:11:08.416835 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep  4 17:11:08.471272 systemd[1]: Reloading finished in 257 ms.
Sep  4 17:11:08.527097 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep  4 17:11:08.527214 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep  4 17:11:08.527508 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep  4 17:11:08.531653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep  4 17:11:08.631293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep  4 17:11:08.636190 (kubelet)[2126]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep  4 17:11:08.675618 kubelet[2126]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep  4 17:11:08.675618 kubelet[2126]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep  4 17:11:08.675618 kubelet[2126]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep  4 17:11:08.675953 kubelet[2126]: I0904 17:11:08.675666    2126 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep  4 17:11:10.018698 kubelet[2126]: I0904 17:11:10.018658    2126 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Sep  4 17:11:10.018698 kubelet[2126]: I0904 17:11:10.018688    2126 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep  4 17:11:10.019033 kubelet[2126]: I0904 17:11:10.018898    2126 server.go:895] "Client rotation is on, will bootstrap in background"
Sep  4 17:11:10.043352 kubelet[2126]: I0904 17:11:10.043321    2126 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep  4 17:11:10.044293 kubelet[2126]: E0904 17:11:10.044274    2126 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.33:6443: connect: connection refused
Sep  4 17:11:10.050959 kubelet[2126]: W0904 17:11:10.050814    2126 machine.go:65] Cannot read vendor id correctly, set empty.
Sep  4 17:11:10.051583 kubelet[2126]: I0904 17:11:10.051562    2126 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Sep  4 17:11:10.051773 kubelet[2126]: I0904 17:11:10.051764    2126 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep  4 17:11:10.051950 kubelet[2126]: I0904 17:11:10.051927    2126 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Sep  4 17:11:10.052049 kubelet[2126]: I0904 17:11:10.051955    2126 topology_manager.go:138] "Creating topology manager with none policy"
Sep  4 17:11:10.052049 kubelet[2126]: I0904 17:11:10.051963    2126 container_manager_linux.go:301] "Creating device plugin manager"
Sep  4 17:11:10.052145 kubelet[2126]: I0904 17:11:10.052131    2126 state_mem.go:36] "Initialized new in-memory state store"
Sep  4 17:11:10.053963 kubelet[2126]: I0904 17:11:10.053935    2126 kubelet.go:393] "Attempting to sync node with API server"
Sep  4 17:11:10.053963 kubelet[2126]: I0904 17:11:10.053966    2126 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep  4 17:11:10.054075 kubelet[2126]: I0904 17:11:10.054057    2126 kubelet.go:309] "Adding apiserver pod source"
Sep  4 17:11:10.054075 kubelet[2126]: I0904 17:11:10.054072    2126 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep  4 17:11:10.055305 kubelet[2126]: W0904 17:11:10.055254    2126 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Sep  4 17:11:10.055386 kubelet[2126]: E0904 17:11:10.055326    2126 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Sep  4 17:11:10.055467 kubelet[2126]: W0904 17:11:10.055436    2126 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Sep  4 17:11:10.055504 kubelet[2126]: E0904 17:11:10.055473    2126 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Sep  4 17:11:10.057634 kubelet[2126]: I0904 17:11:10.057610    2126 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Sep  4 17:11:10.060056 kubelet[2126]: W0904 17:11:10.059714    2126 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep  4 17:11:10.065471 kubelet[2126]: I0904 17:11:10.065451    2126 server.go:1232] "Started kubelet"
Sep  4 17:11:10.069322 kubelet[2126]: I0904 17:11:10.065495    2126 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep  4 17:11:10.069322 kubelet[2126]: I0904 17:11:10.067467    2126 volume_manager.go:291] "Starting Kubelet Volume Manager"
Sep  4 17:11:10.069322 kubelet[2126]: I0904 17:11:10.066675    2126 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Sep  4 17:11:10.069322 kubelet[2126]: I0904 17:11:10.068470    2126 server.go:462] "Adding debug handlers to kubelet server"
Sep  4 17:11:10.070119 kubelet[2126]: I0904 17:11:10.070084    2126 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Sep  4 17:11:10.070253 kubelet[2126]: E0904 17:11:10.070236    2126 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="200ms"
Sep  4 17:11:10.070406 kubelet[2126]: I0904 17:11:10.070395    2126 reconciler_new.go:29] "Reconciler: start to sync state"
Sep  4 17:11:10.070541 kubelet[2126]: W0904 17:11:10.070394    2126 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Sep  4 17:11:10.070699 kubelet[2126]: E0904 17:11:10.070686    2126 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Sep  4 17:11:10.070798 kubelet[2126]: I0904 17:11:10.066810    2126 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Sep  4 17:11:10.071067 kubelet[2126]: I0904 17:11:10.071053    2126 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep  4 17:11:10.071170 kubelet[2126]: E0904 17:11:10.066757    2126 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Sep  4 17:11:10.071262 kubelet[2126]: E0904 17:11:10.071251    2126 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep  4 17:11:10.071624 kubelet[2126]: E0904 17:11:10.071517    2126 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17f219b473d0d357", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.September, 4, 17, 11, 10, 64194391, time.Local), LastTimestamp:time.Date(2024, time.September, 4, 17, 11, 10, 64194391, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.33:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.33:6443: connect: connection refused'(may retry after sleeping)
Sep  4 17:11:10.086327 kubelet[2126]: I0904 17:11:10.083820    2126 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep  4 17:11:10.086327 kubelet[2126]: I0904 17:11:10.084869    2126 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep  4 17:11:10.086327 kubelet[2126]: I0904 17:11:10.084891    2126 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep  4 17:11:10.086327 kubelet[2126]: I0904 17:11:10.084907    2126 kubelet.go:2303] "Starting kubelet main sync loop"
Sep  4 17:11:10.086327 kubelet[2126]: E0904 17:11:10.084956    2126 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep  4 17:11:10.086327 kubelet[2126]: W0904 17:11:10.085424    2126 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Sep  4 17:11:10.086327 kubelet[2126]: E0904 17:11:10.085449    2126 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Sep  4 17:11:10.094851 kubelet[2126]: I0904 17:11:10.094830    2126 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep  4 17:11:10.095042 kubelet[2126]: I0904 17:11:10.095015    2126 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep  4 17:11:10.095109 kubelet[2126]: I0904 17:11:10.095101    2126 state_mem.go:36] "Initialized new in-memory state store"
Sep  4 17:11:10.097223 kubelet[2126]: I0904 17:11:10.097201    2126 policy_none.go:49] "None policy: Start"
Sep  4 17:11:10.097829 kubelet[2126]: I0904 17:11:10.097800    2126 memory_manager.go:169] "Starting memorymanager" policy="None"
Sep  4 17:11:10.097995 kubelet[2126]: I0904 17:11:10.097984    2126 state_mem.go:35] "Initializing new in-memory state store"
Sep  4 17:11:10.104518 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep  4 17:11:10.114762 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep  4 17:11:10.117689 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep  4 17:11:10.131994 kubelet[2126]: I0904 17:11:10.131968    2126 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep  4 17:11:10.132254 kubelet[2126]: I0904 17:11:10.132231    2126 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep  4 17:11:10.133189 kubelet[2126]: E0904 17:11:10.133170    2126 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep  4 17:11:10.170800 kubelet[2126]: I0904 17:11:10.170767    2126 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Sep  4 17:11:10.171206 kubelet[2126]: E0904 17:11:10.171186    2126 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost"
Sep  4 17:11:10.185603 kubelet[2126]: I0904 17:11:10.185582    2126 topology_manager.go:215] "Topology Admit Handler" podUID="f5bf8d52acd7337c82951a97b42c345d" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Sep  4 17:11:10.188132 kubelet[2126]: I0904 17:11:10.186897    2126 topology_manager.go:215] "Topology Admit Handler" podUID="cacd2a680dbc59f99275412e0ba6e38b" podNamespace="kube-system" podName="kube-scheduler-localhost"
Sep  4 17:11:10.191370 kubelet[2126]: I0904 17:11:10.191343    2126 topology_manager.go:215] "Topology Admit Handler" podUID="c478d1607a2d82c96996df8e82b907c7" podNamespace="kube-system" podName="kube-apiserver-localhost"
Sep  4 17:11:10.197351 systemd[1]: Created slice kubepods-burstable-podcacd2a680dbc59f99275412e0ba6e38b.slice - libcontainer container kubepods-burstable-podcacd2a680dbc59f99275412e0ba6e38b.slice.
Sep  4 17:11:10.212732 systemd[1]: Created slice kubepods-burstable-podf5bf8d52acd7337c82951a97b42c345d.slice - libcontainer container kubepods-burstable-podf5bf8d52acd7337c82951a97b42c345d.slice.
Sep  4 17:11:10.216715 systemd[1]: Created slice kubepods-burstable-podc478d1607a2d82c96996df8e82b907c7.slice - libcontainer container kubepods-burstable-podc478d1607a2d82c96996df8e82b907c7.slice.
Sep  4 17:11:10.271144 kubelet[2126]: I0904 17:11:10.271024    2126 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost"
Sep  4 17:11:10.271144 kubelet[2126]: I0904 17:11:10.271081    2126 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost"
Sep  4 17:11:10.271144 kubelet[2126]: E0904 17:11:10.271101    2126 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="400ms"
Sep  4 17:11:10.371469 kubelet[2126]: I0904 17:11:10.371422    2126 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c478d1607a2d82c96996df8e82b907c7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c478d1607a2d82c96996df8e82b907c7\") " pod="kube-system/kube-apiserver-localhost"
Sep  4 17:11:10.371599 kubelet[2126]: I0904 17:11:10.371503    2126 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost"
Sep  4 17:11:10.371599 kubelet[2126]: I0904 17:11:10.371529    2126 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost"
Sep  4 17:11:10.371599 kubelet[2126]: I0904 17:11:10.371551    2126 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cacd2a680dbc59f99275412e0ba6e38b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"cacd2a680dbc59f99275412e0ba6e38b\") " pod="kube-system/kube-scheduler-localhost"
Sep  4 17:11:10.371599 kubelet[2126]: I0904 17:11:10.371572    2126 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c478d1607a2d82c96996df8e82b907c7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c478d1607a2d82c96996df8e82b907c7\") " pod="kube-system/kube-apiserver-localhost"
Sep  4 17:11:10.371599 kubelet[2126]: I0904 17:11:10.371597    2126 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c478d1607a2d82c96996df8e82b907c7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c478d1607a2d82c96996df8e82b907c7\") " pod="kube-system/kube-apiserver-localhost"
Sep  4 17:11:10.371711 kubelet[2126]: I0904 17:11:10.371627    2126 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost"
Sep  4 17:11:10.372697 kubelet[2126]: I0904 17:11:10.372376    2126 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Sep  4 17:11:10.372754 kubelet[2126]: E0904 17:11:10.372729    2126 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost"
Sep  4 17:11:10.513010 kubelet[2126]: E0904 17:11:10.512970    2126 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:10.513655 containerd[1429]: time="2024-09-04T17:11:10.513619135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:cacd2a680dbc59f99275412e0ba6e38b,Namespace:kube-system,Attempt:0,}"
Sep  4 17:11:10.514794 kubelet[2126]: E0904 17:11:10.514776    2126 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:10.515632 containerd[1429]: time="2024-09-04T17:11:10.515392330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f5bf8d52acd7337c82951a97b42c345d,Namespace:kube-system,Attempt:0,}"
Sep  4 17:11:10.519805 kubelet[2126]: E0904 17:11:10.519781    2126 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:10.520453 containerd[1429]: time="2024-09-04T17:11:10.520186021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c478d1607a2d82c96996df8e82b907c7,Namespace:kube-system,Attempt:0,}"
Sep  4 17:11:10.672936 kubelet[2126]: E0904 17:11:10.672900    2126 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="800ms"
Sep  4 17:11:10.774799 kubelet[2126]: I0904 17:11:10.774769    2126 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Sep  4 17:11:10.775082 kubelet[2126]: E0904 17:11:10.775067    2126 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost"
Sep  4 17:11:10.930359 kubelet[2126]: W0904 17:11:10.930217    2126 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Sep  4 17:11:10.930359 kubelet[2126]: E0904 17:11:10.930281    2126 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Sep  4 17:11:11.091788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1130407357.mount: Deactivated successfully.
Sep  4 17:11:11.096221 containerd[1429]: time="2024-09-04T17:11:11.096171151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Sep  4 17:11:11.097089 containerd[1429]: time="2024-09-04T17:11:11.097052684Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Sep  4 17:11:11.097737 containerd[1429]: time="2024-09-04T17:11:11.097674447Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep  4 17:11:11.099453 containerd[1429]: time="2024-09-04T17:11:11.099375269Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Sep  4 17:11:11.106524 containerd[1429]: time="2024-09-04T17:11:11.106470590Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Sep  4 17:11:11.107380 containerd[1429]: time="2024-09-04T17:11:11.107338371Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep  4 17:11:11.110654 containerd[1429]: time="2024-09-04T17:11:11.110607812Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Sep  4 17:11:11.112347 containerd[1429]: time="2024-09-04T17:11:11.111799727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Sep  4 17:11:11.112607 containerd[1429]: time="2024-09-04T17:11:11.112578359Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 598.859489ms"
Sep  4 17:11:11.115161 containerd[1429]: time="2024-09-04T17:11:11.114938722Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 599.467203ms"
Sep  4 17:11:11.116173 containerd[1429]: time="2024-09-04T17:11:11.116101893Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 595.836804ms"
Sep  4 17:11:11.290859 containerd[1429]: time="2024-09-04T17:11:11.290528460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:11:11.290859 containerd[1429]: time="2024-09-04T17:11:11.290668819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:11:11.290859 containerd[1429]: time="2024-09-04T17:11:11.290693765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:11:11.292215 containerd[1429]: time="2024-09-04T17:11:11.292107872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:11:11.293464 containerd[1429]: time="2024-09-04T17:11:11.293125087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:11:11.293464 containerd[1429]: time="2024-09-04T17:11:11.293179136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:11:11.293464 containerd[1429]: time="2024-09-04T17:11:11.293197366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:11:11.293464 containerd[1429]: time="2024-09-04T17:11:11.293211158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:11:11.297214 containerd[1429]: time="2024-09-04T17:11:11.297102441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:11:11.297338 containerd[1429]: time="2024-09-04T17:11:11.297189431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:11:11.297338 containerd[1429]: time="2024-09-04T17:11:11.297222972Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:11:11.297338 containerd[1429]: time="2024-09-04T17:11:11.297233486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:11:11.312503 systemd[1]: Started cri-containerd-780f94bcbf5af59bd947e36928c0aba4862f2c63e4efcc7c1792a0dee60608f7.scope - libcontainer container 780f94bcbf5af59bd947e36928c0aba4862f2c63e4efcc7c1792a0dee60608f7.
Sep  4 17:11:11.313686 systemd[1]: Started cri-containerd-fcf71b999343b55a0a173fffef57dd4c811dc0743a2770aa6eb0a368d554f599.scope - libcontainer container fcf71b999343b55a0a173fffef57dd4c811dc0743a2770aa6eb0a368d554f599.
Sep  4 17:11:11.317755 systemd[1]: Started cri-containerd-644ddbf1928af17d21f7895eee218a9110223890cfeddd70699c10dde4bea1be.scope - libcontainer container 644ddbf1928af17d21f7895eee218a9110223890cfeddd70699c10dde4bea1be.
Sep  4 17:11:11.350117 containerd[1429]: time="2024-09-04T17:11:11.349721911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f5bf8d52acd7337c82951a97b42c345d,Namespace:kube-system,Attempt:0,} returns sandbox id \"fcf71b999343b55a0a173fffef57dd4c811dc0743a2770aa6eb0a368d554f599\""
Sep  4 17:11:11.352800 containerd[1429]: time="2024-09-04T17:11:11.352710273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:cacd2a680dbc59f99275412e0ba6e38b,Namespace:kube-system,Attempt:0,} returns sandbox id \"780f94bcbf5af59bd947e36928c0aba4862f2c63e4efcc7c1792a0dee60608f7\""
Sep  4 17:11:11.354169 kubelet[2126]: E0904 17:11:11.353971    2126 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:11.354169 kubelet[2126]: E0904 17:11:11.354022    2126 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:11.357848 containerd[1429]: time="2024-09-04T17:11:11.355420875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c478d1607a2d82c96996df8e82b907c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"644ddbf1928af17d21f7895eee218a9110223890cfeddd70699c10dde4bea1be\""
Sep  4 17:11:11.357955 kubelet[2126]: E0904 17:11:11.355850    2126 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:11.361187 containerd[1429]: time="2024-09-04T17:11:11.361147823Z" level=info msg="CreateContainer within sandbox \"644ddbf1928af17d21f7895eee218a9110223890cfeddd70699c10dde4bea1be\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep  4 17:11:11.362449 containerd[1429]: time="2024-09-04T17:11:11.362371040Z" level=info msg="CreateContainer within sandbox \"780f94bcbf5af59bd947e36928c0aba4862f2c63e4efcc7c1792a0dee60608f7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep  4 17:11:11.362618 containerd[1429]: time="2024-09-04T17:11:11.362576162Z" level=info msg="CreateContainer within sandbox \"fcf71b999343b55a0a173fffef57dd4c811dc0743a2770aa6eb0a368d554f599\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep  4 17:11:11.396187 containerd[1429]: time="2024-09-04T17:11:11.396082580Z" level=info msg="CreateContainer within sandbox \"fcf71b999343b55a0a173fffef57dd4c811dc0743a2770aa6eb0a368d554f599\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fd15f11c2187507e8c20a3a81b8daea1c2eb79d7e5c78c79139ff4f9a53cc4a5\""
Sep  4 17:11:11.396988 kubelet[2126]: W0904 17:11:11.396864    2126 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Sep  4 17:11:11.396988 kubelet[2126]: E0904 17:11:11.396931    2126 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Sep  4 17:11:11.397113 containerd[1429]: time="2024-09-04T17:11:11.396923017Z" level=info msg="StartContainer for \"fd15f11c2187507e8c20a3a81b8daea1c2eb79d7e5c78c79139ff4f9a53cc4a5\""
Sep  4 17:11:11.401004 containerd[1429]: time="2024-09-04T17:11:11.400813940Z" level=info msg="CreateContainer within sandbox \"644ddbf1928af17d21f7895eee218a9110223890cfeddd70699c10dde4bea1be\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b02c7b607cc65f93ab6ca68e45d443574b5cc7d416e83866c0c14f069fb0b695\""
Sep  4 17:11:11.402295 containerd[1429]: time="2024-09-04T17:11:11.402260988Z" level=info msg="CreateContainer within sandbox \"780f94bcbf5af59bd947e36928c0aba4862f2c63e4efcc7c1792a0dee60608f7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"38ab9176a527ccab448e91d604c2a47c22c76e4fcdc23bb4a00cc9a34a36c565\""
Sep  4 17:11:11.403288 containerd[1429]: time="2024-09-04T17:11:11.403180060Z" level=info msg="StartContainer for \"38ab9176a527ccab448e91d604c2a47c22c76e4fcdc23bb4a00cc9a34a36c565\""
Sep  4 17:11:11.404246 containerd[1429]: time="2024-09-04T17:11:11.403557802Z" level=info msg="StartContainer for \"b02c7b607cc65f93ab6ca68e45d443574b5cc7d416e83866c0c14f069fb0b695\""
Sep  4 17:11:11.437501 systemd[1]: Started cri-containerd-38ab9176a527ccab448e91d604c2a47c22c76e4fcdc23bb4a00cc9a34a36c565.scope - libcontainer container 38ab9176a527ccab448e91d604c2a47c22c76e4fcdc23bb4a00cc9a34a36c565.
Sep  4 17:11:11.438592 systemd[1]: Started cri-containerd-b02c7b607cc65f93ab6ca68e45d443574b5cc7d416e83866c0c14f069fb0b695.scope - libcontainer container b02c7b607cc65f93ab6ca68e45d443574b5cc7d416e83866c0c14f069fb0b695.
Sep  4 17:11:11.439952 systemd[1]: Started cri-containerd-fd15f11c2187507e8c20a3a81b8daea1c2eb79d7e5c78c79139ff4f9a53cc4a5.scope - libcontainer container fd15f11c2187507e8c20a3a81b8daea1c2eb79d7e5c78c79139ff4f9a53cc4a5.
Sep  4 17:11:11.444402 kubelet[2126]: W0904 17:11:11.444348    2126 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Sep  4 17:11:11.444532 kubelet[2126]: E0904 17:11:11.444519    2126 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Sep  4 17:11:11.472114 kubelet[2126]: W0904 17:11:11.472053    2126 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Sep  4 17:11:11.472114 kubelet[2126]: E0904 17:11:11.472107    2126 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Sep  4 17:11:11.475375 kubelet[2126]: E0904 17:11:11.475347    2126 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="1.6s"
Sep  4 17:11:11.494447 containerd[1429]: time="2024-09-04T17:11:11.494357284Z" level=info msg="StartContainer for \"38ab9176a527ccab448e91d604c2a47c22c76e4fcdc23bb4a00cc9a34a36c565\" returns successfully"
Sep  4 17:11:11.504995 containerd[1429]: time="2024-09-04T17:11:11.504954512Z" level=info msg="StartContainer for \"b02c7b607cc65f93ab6ca68e45d443574b5cc7d416e83866c0c14f069fb0b695\" returns successfully"
Sep  4 17:11:11.505282 containerd[1429]: time="2024-09-04T17:11:11.505121656Z" level=info msg="StartContainer for \"fd15f11c2187507e8c20a3a81b8daea1c2eb79d7e5c78c79139ff4f9a53cc4a5\" returns successfully"
Sep  4 17:11:11.582293 kubelet[2126]: I0904 17:11:11.582261    2126 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Sep  4 17:11:11.582943 kubelet[2126]: E0904 17:11:11.582792    2126 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost"
Sep  4 17:11:12.099589 kubelet[2126]: E0904 17:11:12.099390    2126 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:12.102838 kubelet[2126]: E0904 17:11:12.102815    2126 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:12.104135 kubelet[2126]: E0904 17:11:12.104077    2126 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:13.106363 kubelet[2126]: E0904 17:11:13.106256    2126 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:13.184413 kubelet[2126]: I0904 17:11:13.184369    2126 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Sep  4 17:11:13.242001 kubelet[2126]: E0904 17:11:13.241962    2126 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep  4 17:11:13.312588 kubelet[2126]: I0904 17:11:13.312537    2126 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Sep  4 17:11:14.060239 kubelet[2126]: I0904 17:11:14.060177    2126 apiserver.go:52] "Watching apiserver"
Sep  4 17:11:14.070589 kubelet[2126]: I0904 17:11:14.070550    2126 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Sep  4 17:11:14.113680 kubelet[2126]: E0904 17:11:14.113169    2126 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep  4 17:11:14.113680 kubelet[2126]: E0904 17:11:14.113624    2126 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:15.813875 systemd[1]: Reloading requested from client PID 2403 ('systemctl') (unit session-7.scope)...
Sep  4 17:11:15.813892 systemd[1]: Reloading...
Sep  4 17:11:15.882348 zram_generator::config[2443]: No configuration found.
Sep  4 17:11:16.037251 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep  4 17:11:16.103740 systemd[1]: Reloading finished in 289 ms.
Sep  4 17:11:16.132269 kubelet[2126]: I0904 17:11:16.132194    2126 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep  4 17:11:16.132267 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep  4 17:11:16.141846 systemd[1]: kubelet.service: Deactivated successfully.
Sep  4 17:11:16.142106 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep  4 17:11:16.142229 systemd[1]: kubelet.service: Consumed 1.847s CPU time, 115.9M memory peak, 0B memory swap peak.
Sep  4 17:11:16.153670 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep  4 17:11:16.246800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep  4 17:11:16.251100 (kubelet)[2482]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep  4 17:11:16.305195 kubelet[2482]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep  4 17:11:16.305195 kubelet[2482]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep  4 17:11:16.305195 kubelet[2482]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep  4 17:11:16.305845 kubelet[2482]: I0904 17:11:16.305251    2482 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep  4 17:11:16.312626 kubelet[2482]: I0904 17:11:16.312591    2482 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Sep  4 17:11:16.312626 kubelet[2482]: I0904 17:11:16.312620    2482 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep  4 17:11:16.313153 kubelet[2482]: I0904 17:11:16.312783    2482 server.go:895] "Client rotation is on, will bootstrap in background"
Sep  4 17:11:16.316330 kubelet[2482]: I0904 17:11:16.314228    2482 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep  4 17:11:16.316330 kubelet[2482]: I0904 17:11:16.315104    2482 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep  4 17:11:16.324978 kubelet[2482]: W0904 17:11:16.324959    2482 machine.go:65] Cannot read vendor id correctly, set empty.
Sep  4 17:11:16.325674 kubelet[2482]: I0904 17:11:16.325661    2482 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Sep  4 17:11:16.325854 kubelet[2482]: I0904 17:11:16.325844    2482 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep  4 17:11:16.326018 kubelet[2482]: I0904 17:11:16.326000    2482 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Sep  4 17:11:16.326099 kubelet[2482]: I0904 17:11:16.326042    2482 topology_manager.go:138] "Creating topology manager with none policy"
Sep  4 17:11:16.326099 kubelet[2482]: I0904 17:11:16.326053    2482 container_manager_linux.go:301] "Creating device plugin manager"
Sep  4 17:11:16.326099 kubelet[2482]: I0904 17:11:16.326089    2482 state_mem.go:36] "Initialized new in-memory state store"
Sep  4 17:11:16.326175 kubelet[2482]: I0904 17:11:16.326163    2482 kubelet.go:393] "Attempting to sync node with API server"
Sep  4 17:11:16.326201 kubelet[2482]: I0904 17:11:16.326179    2482 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep  4 17:11:16.326223 kubelet[2482]: I0904 17:11:16.326201    2482 kubelet.go:309] "Adding apiserver pod source"
Sep  4 17:11:16.326223 kubelet[2482]: I0904 17:11:16.326211    2482 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep  4 17:11:16.327205 kubelet[2482]: I0904 17:11:16.327081    2482 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Sep  4 17:11:16.327684 kubelet[2482]: I0904 17:11:16.327659    2482 server.go:1232] "Started kubelet"
Sep  4 17:11:16.328998 kubelet[2482]: I0904 17:11:16.328973    2482 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep  4 17:11:16.330621 kubelet[2482]: I0904 17:11:16.330409    2482 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Sep  4 17:11:16.331132 kubelet[2482]: I0904 17:11:16.331105    2482 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep  4 17:11:16.331183 kubelet[2482]: I0904 17:11:16.331165    2482 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Sep  4 17:11:16.334326 kubelet[2482]: E0904 17:11:16.331509    2482 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Sep  4 17:11:16.334326 kubelet[2482]: E0904 17:11:16.331594    2482 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep  4 17:11:16.335096 kubelet[2482]: I0904 17:11:16.335069    2482 server.go:462] "Adding debug handlers to kubelet server"
Sep  4 17:11:16.339556 kubelet[2482]: I0904 17:11:16.339533    2482 volume_manager.go:291] "Starting Kubelet Volume Manager"
Sep  4 17:11:16.339733 kubelet[2482]: I0904 17:11:16.339719    2482 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Sep  4 17:11:16.339929 kubelet[2482]: I0904 17:11:16.339916    2482 reconciler_new.go:29] "Reconciler: start to sync state"
Sep  4 17:11:16.364911 kubelet[2482]: I0904 17:11:16.364811    2482 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep  4 17:11:16.366880 kubelet[2482]: I0904 17:11:16.366848    2482 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep  4 17:11:16.366880 kubelet[2482]: I0904 17:11:16.366877    2482 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep  4 17:11:16.366986 kubelet[2482]: I0904 17:11:16.366896    2482 kubelet.go:2303] "Starting kubelet main sync loop"
Sep  4 17:11:16.366986 kubelet[2482]: E0904 17:11:16.366943    2482 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep  4 17:11:16.399992 kubelet[2482]: I0904 17:11:16.399965    2482 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep  4 17:11:16.400244 kubelet[2482]: I0904 17:11:16.400233    2482 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep  4 17:11:16.400362 kubelet[2482]: I0904 17:11:16.400342    2482 state_mem.go:36] "Initialized new in-memory state store"
Sep  4 17:11:16.400599 kubelet[2482]: I0904 17:11:16.400587    2482 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep  4 17:11:16.400676 kubelet[2482]: I0904 17:11:16.400666    2482 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep  4 17:11:16.400721 kubelet[2482]: I0904 17:11:16.400714    2482 policy_none.go:49] "None policy: Start"
Sep  4 17:11:16.401408 kubelet[2482]: I0904 17:11:16.401390    2482 memory_manager.go:169] "Starting memorymanager" policy="None"
Sep  4 17:11:16.401610 kubelet[2482]: I0904 17:11:16.401573    2482 state_mem.go:35] "Initializing new in-memory state store"
Sep  4 17:11:16.401798 kubelet[2482]: I0904 17:11:16.401782    2482 state_mem.go:75] "Updated machine memory state"
Sep  4 17:11:16.407611 kubelet[2482]: I0904 17:11:16.407591    2482 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep  4 17:11:16.407936 kubelet[2482]: I0904 17:11:16.407918    2482 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep  4 17:11:16.443166 kubelet[2482]: I0904 17:11:16.443144    2482 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Sep  4 17:11:16.449907 kubelet[2482]: I0904 17:11:16.449883    2482 kubelet_node_status.go:108] "Node was previously registered" node="localhost"
Sep  4 17:11:16.450186 kubelet[2482]: I0904 17:11:16.450115    2482 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Sep  4 17:11:16.467262 kubelet[2482]: I0904 17:11:16.467229    2482 topology_manager.go:215] "Topology Admit Handler" podUID="c478d1607a2d82c96996df8e82b907c7" podNamespace="kube-system" podName="kube-apiserver-localhost"
Sep  4 17:11:16.467534 kubelet[2482]: I0904 17:11:16.467355    2482 topology_manager.go:215] "Topology Admit Handler" podUID="f5bf8d52acd7337c82951a97b42c345d" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Sep  4 17:11:16.467534 kubelet[2482]: I0904 17:11:16.467393    2482 topology_manager.go:215] "Topology Admit Handler" podUID="cacd2a680dbc59f99275412e0ba6e38b" podNamespace="kube-system" podName="kube-scheduler-localhost"
Sep  4 17:11:16.640707 kubelet[2482]: I0904 17:11:16.640588    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cacd2a680dbc59f99275412e0ba6e38b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"cacd2a680dbc59f99275412e0ba6e38b\") " pod="kube-system/kube-scheduler-localhost"
Sep  4 17:11:16.640707 kubelet[2482]: I0904 17:11:16.640639    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c478d1607a2d82c96996df8e82b907c7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c478d1607a2d82c96996df8e82b907c7\") " pod="kube-system/kube-apiserver-localhost"
Sep  4 17:11:16.640707 kubelet[2482]: I0904 17:11:16.640663    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost"
Sep  4 17:11:16.640707 kubelet[2482]: I0904 17:11:16.640682    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost"
Sep  4 17:11:16.640887 kubelet[2482]: I0904 17:11:16.640724    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost"
Sep  4 17:11:16.640887 kubelet[2482]: I0904 17:11:16.640749    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost"
Sep  4 17:11:16.640887 kubelet[2482]: I0904 17:11:16.640769    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c478d1607a2d82c96996df8e82b907c7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c478d1607a2d82c96996df8e82b907c7\") " pod="kube-system/kube-apiserver-localhost"
Sep  4 17:11:16.640887 kubelet[2482]: I0904 17:11:16.640789    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c478d1607a2d82c96996df8e82b907c7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c478d1607a2d82c96996df8e82b907c7\") " pod="kube-system/kube-apiserver-localhost"
Sep  4 17:11:16.640887 kubelet[2482]: I0904 17:11:16.640807    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost"
Sep  4 17:11:16.774357 kubelet[2482]: E0904 17:11:16.774297    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:16.774777 kubelet[2482]: E0904 17:11:16.774748    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:16.774863 kubelet[2482]: E0904 17:11:16.774754    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:17.327720 kubelet[2482]: I0904 17:11:17.327497    2482 apiserver.go:52] "Watching apiserver"
Sep  4 17:11:17.340288 kubelet[2482]: I0904 17:11:17.340252    2482 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Sep  4 17:11:17.383775 kubelet[2482]: E0904 17:11:17.383745    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:17.392130 kubelet[2482]: E0904 17:11:17.392098    2482 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep  4 17:11:17.392420 kubelet[2482]: E0904 17:11:17.392405    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:17.392727 kubelet[2482]: E0904 17:11:17.392708    2482 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep  4 17:11:17.393125 kubelet[2482]: E0904 17:11:17.393108    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:17.434183 kubelet[2482]: I0904 17:11:17.434142    2482 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.434085486 podCreationTimestamp="2024-09-04 17:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:11:17.424902812 +0000 UTC m=+1.169161241" watchObservedRunningTime="2024-09-04 17:11:17.434085486 +0000 UTC m=+1.178343875"
Sep  4 17:11:17.456252 kubelet[2482]: I0904 17:11:17.456055    2482 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.455733342 podCreationTimestamp="2024-09-04 17:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:11:17.434982814 +0000 UTC m=+1.179241163" watchObservedRunningTime="2024-09-04 17:11:17.455733342 +0000 UTC m=+1.199991731"
Sep  4 17:11:17.456252 kubelet[2482]: I0904 17:11:17.456212    2482 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.456187786 podCreationTimestamp="2024-09-04 17:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:11:17.456129505 +0000 UTC m=+1.200387854" watchObservedRunningTime="2024-09-04 17:11:17.456187786 +0000 UTC m=+1.200446135"
Sep  4 17:11:18.386343 kubelet[2482]: E0904 17:11:18.384519    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:18.386343 kubelet[2482]: E0904 17:11:18.384750    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:21.555814 sudo[1609]: pam_unix(sudo:session): session closed for user root
Sep  4 17:11:21.557632 sshd[1606]: pam_unix(sshd:session): session closed for user core
Sep  4 17:11:21.562216 systemd[1]: sshd@6-10.0.0.33:22-10.0.0.1:36300.service: Deactivated successfully.
Sep  4 17:11:21.564711 systemd[1]: session-7.scope: Deactivated successfully.
Sep  4 17:11:21.564904 systemd[1]: session-7.scope: Consumed 6.776s CPU time, 135.7M memory peak, 0B memory swap peak.
Sep  4 17:11:21.565698 systemd-logind[1410]: Session 7 logged out. Waiting for processes to exit.
Sep  4 17:11:21.566636 systemd-logind[1410]: Removed session 7.
Sep  4 17:11:22.090067 kubelet[2482]: E0904 17:11:22.089897    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:22.392431 kubelet[2482]: E0904 17:11:22.392124    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:24.124920 kubelet[2482]: E0904 17:11:24.122406    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:24.419748 kubelet[2482]: E0904 17:11:24.419631    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:24.584232 kubelet[2482]: E0904 17:11:24.584181    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:25.412682 kubelet[2482]: E0904 17:11:25.412429    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:27.106484 update_engine[1414]: I0904 17:11:27.106431  1414 update_attempter.cc:509] Updating boot flags...
Sep  4 17:11:27.139357 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2583)
Sep  4 17:11:27.178473 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2586)
Sep  4 17:11:27.209909 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2586)
Sep  4 17:11:30.581486 kubelet[2482]: I0904 17:11:30.581440    2482 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep  4 17:11:30.581848 containerd[1429]: time="2024-09-04T17:11:30.581811854Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep  4 17:11:30.582051 kubelet[2482]: I0904 17:11:30.582023    2482 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep  4 17:11:31.261799 kubelet[2482]: I0904 17:11:31.261758    2482 topology_manager.go:215] "Topology Admit Handler" podUID="1e77eab9-2d82-4e65-9941-d346fdec84c6" podNamespace="kube-system" podName="kube-proxy-gm9nr"
Sep  4 17:11:31.271585 systemd[1]: Created slice kubepods-besteffort-pod1e77eab9_2d82_4e65_9941_d346fdec84c6.slice - libcontainer container kubepods-besteffort-pod1e77eab9_2d82_4e65_9941_d346fdec84c6.slice.
Sep  4 17:11:31.338192 kubelet[2482]: I0904 17:11:31.338140    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m6jp\" (UniqueName: \"kubernetes.io/projected/1e77eab9-2d82-4e65-9941-d346fdec84c6-kube-api-access-7m6jp\") pod \"kube-proxy-gm9nr\" (UID: \"1e77eab9-2d82-4e65-9941-d346fdec84c6\") " pod="kube-system/kube-proxy-gm9nr"
Sep  4 17:11:31.338192 kubelet[2482]: I0904 17:11:31.338191    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1e77eab9-2d82-4e65-9941-d346fdec84c6-kube-proxy\") pod \"kube-proxy-gm9nr\" (UID: \"1e77eab9-2d82-4e65-9941-d346fdec84c6\") " pod="kube-system/kube-proxy-gm9nr"
Sep  4 17:11:31.338372 kubelet[2482]: I0904 17:11:31.338216    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e77eab9-2d82-4e65-9941-d346fdec84c6-lib-modules\") pod \"kube-proxy-gm9nr\" (UID: \"1e77eab9-2d82-4e65-9941-d346fdec84c6\") " pod="kube-system/kube-proxy-gm9nr"
Sep  4 17:11:31.338372 kubelet[2482]: I0904 17:11:31.338237    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e77eab9-2d82-4e65-9941-d346fdec84c6-xtables-lock\") pod \"kube-proxy-gm9nr\" (UID: \"1e77eab9-2d82-4e65-9941-d346fdec84c6\") " pod="kube-system/kube-proxy-gm9nr"
Sep  4 17:11:31.456740 kubelet[2482]: E0904 17:11:31.456693    2482 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Sep  4 17:11:31.456740 kubelet[2482]: E0904 17:11:31.456741    2482 projected.go:198] Error preparing data for projected volume kube-api-access-7m6jp for pod kube-system/kube-proxy-gm9nr: configmap "kube-root-ca.crt" not found
Sep  4 17:11:31.456889 kubelet[2482]: E0904 17:11:31.456821    2482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1e77eab9-2d82-4e65-9941-d346fdec84c6-kube-api-access-7m6jp podName:1e77eab9-2d82-4e65-9941-d346fdec84c6 nodeName:}" failed. No retries permitted until 2024-09-04 17:11:31.956788007 +0000 UTC m=+15.701046396 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7m6jp" (UniqueName: "kubernetes.io/projected/1e77eab9-2d82-4e65-9941-d346fdec84c6-kube-api-access-7m6jp") pod "kube-proxy-gm9nr" (UID: "1e77eab9-2d82-4e65-9941-d346fdec84c6") : configmap "kube-root-ca.crt" not found
Sep  4 17:11:31.580254 kubelet[2482]: I0904 17:11:31.580206    2482 topology_manager.go:215] "Topology Admit Handler" podUID="aaebc7e9-700a-4502-bc0e-601b1feb8422" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-kdk5g"
Sep  4 17:11:31.589686 systemd[1]: Created slice kubepods-besteffort-podaaebc7e9_700a_4502_bc0e_601b1feb8422.slice - libcontainer container kubepods-besteffort-podaaebc7e9_700a_4502_bc0e_601b1feb8422.slice.
Sep  4 17:11:31.742273 kubelet[2482]: I0904 17:11:31.742234    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w7vr\" (UniqueName: \"kubernetes.io/projected/aaebc7e9-700a-4502-bc0e-601b1feb8422-kube-api-access-4w7vr\") pod \"tigera-operator-5d56685c77-kdk5g\" (UID: \"aaebc7e9-700a-4502-bc0e-601b1feb8422\") " pod="tigera-operator/tigera-operator-5d56685c77-kdk5g"
Sep  4 17:11:31.742645 kubelet[2482]: I0904 17:11:31.742305    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/aaebc7e9-700a-4502-bc0e-601b1feb8422-var-lib-calico\") pod \"tigera-operator-5d56685c77-kdk5g\" (UID: \"aaebc7e9-700a-4502-bc0e-601b1feb8422\") " pod="tigera-operator/tigera-operator-5d56685c77-kdk5g"
Sep  4 17:11:31.893419 containerd[1429]: time="2024-09-04T17:11:31.893194840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-kdk5g,Uid:aaebc7e9-700a-4502-bc0e-601b1feb8422,Namespace:tigera-operator,Attempt:0,}"
Sep  4 17:11:31.915700 containerd[1429]: time="2024-09-04T17:11:31.915570888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:11:31.915700 containerd[1429]: time="2024-09-04T17:11:31.915651128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:11:31.915700 containerd[1429]: time="2024-09-04T17:11:31.915671088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:11:31.915700 containerd[1429]: time="2024-09-04T17:11:31.915686848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:11:31.938496 systemd[1]: Started cri-containerd-7621667f664f78ba693aba666d751737594a7bf72c9d27f582bf9196f51dd63d.scope - libcontainer container 7621667f664f78ba693aba666d751737594a7bf72c9d27f582bf9196f51dd63d.
Sep  4 17:11:31.973169 containerd[1429]: time="2024-09-04T17:11:31.973057993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-kdk5g,Uid:aaebc7e9-700a-4502-bc0e-601b1feb8422,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7621667f664f78ba693aba666d751737594a7bf72c9d27f582bf9196f51dd63d\""
Sep  4 17:11:31.974765 containerd[1429]: time="2024-09-04T17:11:31.974562559Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\""
Sep  4 17:11:32.183182 kubelet[2482]: E0904 17:11:32.183075    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:32.184008 containerd[1429]: time="2024-09-04T17:11:32.183971429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gm9nr,Uid:1e77eab9-2d82-4e65-9941-d346fdec84c6,Namespace:kube-system,Attempt:0,}"
Sep  4 17:11:32.205340 containerd[1429]: time="2024-09-04T17:11:32.204959987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:11:32.205340 containerd[1429]: time="2024-09-04T17:11:32.205020627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:11:32.205340 containerd[1429]: time="2024-09-04T17:11:32.205051708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:11:32.205340 containerd[1429]: time="2024-09-04T17:11:32.205064948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:11:32.226493 systemd[1]: Started cri-containerd-b3c6338d23c0b119ae957ad343f110b30605303145638bb66d31d54d48c0c52f.scope - libcontainer container b3c6338d23c0b119ae957ad343f110b30605303145638bb66d31d54d48c0c52f.
Sep  4 17:11:32.248973 containerd[1429]: time="2024-09-04T17:11:32.248917792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gm9nr,Uid:1e77eab9-2d82-4e65-9941-d346fdec84c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3c6338d23c0b119ae957ad343f110b30605303145638bb66d31d54d48c0c52f\""
Sep  4 17:11:32.249705 kubelet[2482]: E0904 17:11:32.249684    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:32.251970 containerd[1429]: time="2024-09-04T17:11:32.251938963Z" level=info msg="CreateContainer within sandbox \"b3c6338d23c0b119ae957ad343f110b30605303145638bb66d31d54d48c0c52f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep  4 17:11:32.274457 containerd[1429]: time="2024-09-04T17:11:32.274337607Z" level=info msg="CreateContainer within sandbox \"b3c6338d23c0b119ae957ad343f110b30605303145638bb66d31d54d48c0c52f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d63d3afd932ccbba4c0677243d63e60dc988041cfda045edde6e1be17218222c\""
Sep  4 17:11:32.275527 containerd[1429]: time="2024-09-04T17:11:32.275489531Z" level=info msg="StartContainer for \"d63d3afd932ccbba4c0677243d63e60dc988041cfda045edde6e1be17218222c\""
Sep  4 17:11:32.315516 systemd[1]: Started cri-containerd-d63d3afd932ccbba4c0677243d63e60dc988041cfda045edde6e1be17218222c.scope - libcontainer container d63d3afd932ccbba4c0677243d63e60dc988041cfda045edde6e1be17218222c.
Sep  4 17:11:32.353255 containerd[1429]: time="2024-09-04T17:11:32.353121702Z" level=info msg="StartContainer for \"d63d3afd932ccbba4c0677243d63e60dc988041cfda045edde6e1be17218222c\" returns successfully"
Sep  4 17:11:32.424150 kubelet[2482]: E0904 17:11:32.424114    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:32.461883 kubelet[2482]: I0904 17:11:32.461703    2482 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-gm9nr" podStartSLOduration=1.461652309 podCreationTimestamp="2024-09-04 17:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:11:32.461628469 +0000 UTC m=+16.205886858" watchObservedRunningTime="2024-09-04 17:11:32.461652309 +0000 UTC m=+16.205910698"
Sep  4 17:11:32.956006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3455027168.mount: Deactivated successfully.
Sep  4 17:11:33.258623 containerd[1429]: time="2024-09-04T17:11:33.258511291Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:11:33.260096 containerd[1429]: time="2024-09-04T17:11:33.260057976Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=19485895"
Sep  4 17:11:33.261030 containerd[1429]: time="2024-09-04T17:11:33.260976380Z" level=info msg="ImageCreate event name:\"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:11:33.268104 containerd[1429]: time="2024-09-04T17:11:33.267723484Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:11:33.268659 containerd[1429]: time="2024-09-04T17:11:33.268630727Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"19480102\" in 1.294035128s"
Sep  4 17:11:33.268837 containerd[1429]: time="2024-09-04T17:11:33.268735127Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\""
Sep  4 17:11:33.272608 containerd[1429]: time="2024-09-04T17:11:33.272580421Z" level=info msg="CreateContainer within sandbox \"7621667f664f78ba693aba666d751737594a7bf72c9d27f582bf9196f51dd63d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep  4 17:11:33.282015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1706103936.mount: Deactivated successfully.
Sep  4 17:11:33.283002 containerd[1429]: time="2024-09-04T17:11:33.282961698Z" level=info msg="CreateContainer within sandbox \"7621667f664f78ba693aba666d751737594a7bf72c9d27f582bf9196f51dd63d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"cb9e2ef675817c4297021235f6032f0f5d93f4070bb6dfdb9af421a34eb7e437\""
Sep  4 17:11:33.284372 containerd[1429]: time="2024-09-04T17:11:33.283498660Z" level=info msg="StartContainer for \"cb9e2ef675817c4297021235f6032f0f5d93f4070bb6dfdb9af421a34eb7e437\""
Sep  4 17:11:33.311511 systemd[1]: Started cri-containerd-cb9e2ef675817c4297021235f6032f0f5d93f4070bb6dfdb9af421a34eb7e437.scope - libcontainer container cb9e2ef675817c4297021235f6032f0f5d93f4070bb6dfdb9af421a34eb7e437.
Sep  4 17:11:33.331384 containerd[1429]: time="2024-09-04T17:11:33.331335351Z" level=info msg="StartContainer for \"cb9e2ef675817c4297021235f6032f0f5d93f4070bb6dfdb9af421a34eb7e437\" returns successfully"
Sep  4 17:11:33.444473 kubelet[2482]: I0904 17:11:33.444399    2482 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-kdk5g" podStartSLOduration=1.147053497 podCreationTimestamp="2024-09-04 17:11:31 +0000 UTC" firstStartedPulling="2024-09-04 17:11:31.974101438 +0000 UTC m=+15.718359827" lastFinishedPulling="2024-09-04 17:11:33.271412057 +0000 UTC m=+17.015670446" observedRunningTime="2024-09-04 17:11:33.443538073 +0000 UTC m=+17.187796462" watchObservedRunningTime="2024-09-04 17:11:33.444364116 +0000 UTC m=+17.188622865"
Sep  4 17:11:37.637855 kubelet[2482]: I0904 17:11:37.637787    2482 topology_manager.go:215] "Topology Admit Handler" podUID="9061b578-3cd4-4ab0-8a79-9c4a355b43a4" podNamespace="calico-system" podName="calico-typha-5c4d7cdcbd-dxb82"
Sep  4 17:11:37.657450 systemd[1]: Created slice kubepods-besteffort-pod9061b578_3cd4_4ab0_8a79_9c4a355b43a4.slice - libcontainer container kubepods-besteffort-pod9061b578_3cd4_4ab0_8a79_9c4a355b43a4.slice.
Sep  4 17:11:37.682449 kubelet[2482]: I0904 17:11:37.682401    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9061b578-3cd4-4ab0-8a79-9c4a355b43a4-typha-certs\") pod \"calico-typha-5c4d7cdcbd-dxb82\" (UID: \"9061b578-3cd4-4ab0-8a79-9c4a355b43a4\") " pod="calico-system/calico-typha-5c4d7cdcbd-dxb82"
Sep  4 17:11:37.682584 kubelet[2482]: I0904 17:11:37.682489    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xqg6\" (UniqueName: \"kubernetes.io/projected/9061b578-3cd4-4ab0-8a79-9c4a355b43a4-kube-api-access-5xqg6\") pod \"calico-typha-5c4d7cdcbd-dxb82\" (UID: \"9061b578-3cd4-4ab0-8a79-9c4a355b43a4\") " pod="calico-system/calico-typha-5c4d7cdcbd-dxb82"
Sep  4 17:11:37.682584 kubelet[2482]: I0904 17:11:37.682561    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9061b578-3cd4-4ab0-8a79-9c4a355b43a4-tigera-ca-bundle\") pod \"calico-typha-5c4d7cdcbd-dxb82\" (UID: \"9061b578-3cd4-4ab0-8a79-9c4a355b43a4\") " pod="calico-system/calico-typha-5c4d7cdcbd-dxb82"
Sep  4 17:11:37.703534 kubelet[2482]: I0904 17:11:37.703079    2482 topology_manager.go:215] "Topology Admit Handler" podUID="9ef8a9fb-1499-4be4-a6bd-845f55be48dc" podNamespace="calico-system" podName="calico-node-292pb"
Sep  4 17:11:37.714267 systemd[1]: Created slice kubepods-besteffort-pod9ef8a9fb_1499_4be4_a6bd_845f55be48dc.slice - libcontainer container kubepods-besteffort-pod9ef8a9fb_1499_4be4_a6bd_845f55be48dc.slice.
Sep  4 17:11:37.816731 kubelet[2482]: I0904 17:11:37.816376    2482 topology_manager.go:215] "Topology Admit Handler" podUID="ad9729de-5f0a-425d-b5ea-b886ce65bfc9" podNamespace="calico-system" podName="csi-node-driver-chwdn"
Sep  4 17:11:37.818937 kubelet[2482]: E0904 17:11:37.818884    2482 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-chwdn" podUID="ad9729de-5f0a-425d-b5ea-b886ce65bfc9"
Sep  4 17:11:37.884559 kubelet[2482]: I0904 17:11:37.884517    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9ef8a9fb-1499-4be4-a6bd-845f55be48dc-cni-net-dir\") pod \"calico-node-292pb\" (UID: \"9ef8a9fb-1499-4be4-a6bd-845f55be48dc\") " pod="calico-system/calico-node-292pb"
Sep  4 17:11:37.884559 kubelet[2482]: I0904 17:11:37.884559    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ad9729de-5f0a-425d-b5ea-b886ce65bfc9-socket-dir\") pod \"csi-node-driver-chwdn\" (UID: \"ad9729de-5f0a-425d-b5ea-b886ce65bfc9\") " pod="calico-system/csi-node-driver-chwdn"
Sep  4 17:11:37.884765 kubelet[2482]: I0904 17:11:37.884637    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9ef8a9fb-1499-4be4-a6bd-845f55be48dc-policysync\") pod \"calico-node-292pb\" (UID: \"9ef8a9fb-1499-4be4-a6bd-845f55be48dc\") " pod="calico-system/calico-node-292pb"
Sep  4 17:11:37.884765 kubelet[2482]: I0904 17:11:37.884675    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcl4p\" (UniqueName: \"kubernetes.io/projected/9ef8a9fb-1499-4be4-a6bd-845f55be48dc-kube-api-access-lcl4p\") pod \"calico-node-292pb\" (UID: \"9ef8a9fb-1499-4be4-a6bd-845f55be48dc\") " pod="calico-system/calico-node-292pb"
Sep  4 17:11:37.884765 kubelet[2482]: I0904 17:11:37.884701    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9ef8a9fb-1499-4be4-a6bd-845f55be48dc-cni-log-dir\") pod \"calico-node-292pb\" (UID: \"9ef8a9fb-1499-4be4-a6bd-845f55be48dc\") " pod="calico-system/calico-node-292pb"
Sep  4 17:11:37.884765 kubelet[2482]: I0904 17:11:37.884720    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ad9729de-5f0a-425d-b5ea-b886ce65bfc9-registration-dir\") pod \"csi-node-driver-chwdn\" (UID: \"ad9729de-5f0a-425d-b5ea-b886ce65bfc9\") " pod="calico-system/csi-node-driver-chwdn"
Sep  4 17:11:37.884765 kubelet[2482]: I0904 17:11:37.884744    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ef8a9fb-1499-4be4-a6bd-845f55be48dc-xtables-lock\") pod \"calico-node-292pb\" (UID: \"9ef8a9fb-1499-4be4-a6bd-845f55be48dc\") " pod="calico-system/calico-node-292pb"
Sep  4 17:11:37.884953 kubelet[2482]: I0904 17:11:37.884762    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ef8a9fb-1499-4be4-a6bd-845f55be48dc-tigera-ca-bundle\") pod \"calico-node-292pb\" (UID: \"9ef8a9fb-1499-4be4-a6bd-845f55be48dc\") " pod="calico-system/calico-node-292pb"
Sep  4 17:11:37.884953 kubelet[2482]: I0904 17:11:37.884782    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9ef8a9fb-1499-4be4-a6bd-845f55be48dc-cni-bin-dir\") pod \"calico-node-292pb\" (UID: \"9ef8a9fb-1499-4be4-a6bd-845f55be48dc\") " pod="calico-system/calico-node-292pb"
Sep  4 17:11:37.884953 kubelet[2482]: I0904 17:11:37.884805    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad9729de-5f0a-425d-b5ea-b886ce65bfc9-kubelet-dir\") pod \"csi-node-driver-chwdn\" (UID: \"ad9729de-5f0a-425d-b5ea-b886ce65bfc9\") " pod="calico-system/csi-node-driver-chwdn"
Sep  4 17:11:37.884953 kubelet[2482]: I0904 17:11:37.884825    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ef8a9fb-1499-4be4-a6bd-845f55be48dc-lib-modules\") pod \"calico-node-292pb\" (UID: \"9ef8a9fb-1499-4be4-a6bd-845f55be48dc\") " pod="calico-system/calico-node-292pb"
Sep  4 17:11:37.884953 kubelet[2482]: I0904 17:11:37.884857    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9ef8a9fb-1499-4be4-a6bd-845f55be48dc-var-run-calico\") pod \"calico-node-292pb\" (UID: \"9ef8a9fb-1499-4be4-a6bd-845f55be48dc\") " pod="calico-system/calico-node-292pb"
Sep  4 17:11:37.885088 kubelet[2482]: I0904 17:11:37.884878    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9ef8a9fb-1499-4be4-a6bd-845f55be48dc-var-lib-calico\") pod \"calico-node-292pb\" (UID: \"9ef8a9fb-1499-4be4-a6bd-845f55be48dc\") " pod="calico-system/calico-node-292pb"
Sep  4 17:11:37.885088 kubelet[2482]: I0904 17:11:37.884897    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9ef8a9fb-1499-4be4-a6bd-845f55be48dc-flexvol-driver-host\") pod \"calico-node-292pb\" (UID: \"9ef8a9fb-1499-4be4-a6bd-845f55be48dc\") " pod="calico-system/calico-node-292pb"
Sep  4 17:11:37.885088 kubelet[2482]: I0904 17:11:37.884931    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9ef8a9fb-1499-4be4-a6bd-845f55be48dc-node-certs\") pod \"calico-node-292pb\" (UID: \"9ef8a9fb-1499-4be4-a6bd-845f55be48dc\") " pod="calico-system/calico-node-292pb"
Sep  4 17:11:37.885088 kubelet[2482]: I0904 17:11:37.884950    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ad9729de-5f0a-425d-b5ea-b886ce65bfc9-varrun\") pod \"csi-node-driver-chwdn\" (UID: \"ad9729de-5f0a-425d-b5ea-b886ce65bfc9\") " pod="calico-system/csi-node-driver-chwdn"
Sep  4 17:11:37.885088 kubelet[2482]: I0904 17:11:37.884969    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skbmt\" (UniqueName: \"kubernetes.io/projected/ad9729de-5f0a-425d-b5ea-b886ce65bfc9-kube-api-access-skbmt\") pod \"csi-node-driver-chwdn\" (UID: \"ad9729de-5f0a-425d-b5ea-b886ce65bfc9\") " pod="calico-system/csi-node-driver-chwdn"
Sep  4 17:11:37.980439 kubelet[2482]: E0904 17:11:37.978412    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:37.996213 containerd[1429]: time="2024-09-04T17:11:37.996125094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c4d7cdcbd-dxb82,Uid:9061b578-3cd4-4ab0-8a79-9c4a355b43a4,Namespace:calico-system,Attempt:0,}"
Sep  4 17:11:38.028739 kubelet[2482]: E0904 17:11:38.028655    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:38.028739 kubelet[2482]: W0904 17:11:38.028697    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:38.028739 kubelet[2482]: E0904 17:11:38.028732    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:38.029169 kubelet[2482]: E0904 17:11:38.029020    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:38.029169 kubelet[2482]: W0904 17:11:38.029044    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:38.029169 kubelet[2482]: E0904 17:11:38.029060    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:38.070538 containerd[1429]: time="2024-09-04T17:11:38.070267149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:11:38.070538 containerd[1429]: time="2024-09-04T17:11:38.070482710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:11:38.070738 containerd[1429]: time="2024-09-04T17:11:38.070511670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:11:38.070738 containerd[1429]: time="2024-09-04T17:11:38.070550550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:11:38.092472 systemd[1]: Started cri-containerd-35ac898e30de8a9c26ff21991a862d90806877e7fbefe4f2940d33ae36d4756d.scope - libcontainer container 35ac898e30de8a9c26ff21991a862d90806877e7fbefe4f2940d33ae36d4756d.
Sep  4 17:11:38.131673 containerd[1429]: time="2024-09-04T17:11:38.131622966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c4d7cdcbd-dxb82,Uid:9061b578-3cd4-4ab0-8a79-9c4a355b43a4,Namespace:calico-system,Attempt:0,} returns sandbox id \"35ac898e30de8a9c26ff21991a862d90806877e7fbefe4f2940d33ae36d4756d\""
Sep  4 17:11:38.132497 kubelet[2482]: E0904 17:11:38.132472    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:38.133653 containerd[1429]: time="2024-09-04T17:11:38.133625372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\""
Sep  4 17:11:38.318238 kubelet[2482]: E0904 17:11:38.316409    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:38.318388 containerd[1429]: time="2024-09-04T17:11:38.316901581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-292pb,Uid:9ef8a9fb-1499-4be4-a6bd-845f55be48dc,Namespace:calico-system,Attempt:0,}"
Sep  4 17:11:38.347195 containerd[1429]: time="2024-09-04T17:11:38.346958668Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:11:38.347195 containerd[1429]: time="2024-09-04T17:11:38.347046708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:11:38.347195 containerd[1429]: time="2024-09-04T17:11:38.347062188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:11:38.347195 containerd[1429]: time="2024-09-04T17:11:38.347072908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:11:38.372180 systemd[1]: Started cri-containerd-f4c2ef6b2900d30aa420cce785d133f73afc30dcc7bcfc8bcf38e18315794784.scope - libcontainer container f4c2ef6b2900d30aa420cce785d133f73afc30dcc7bcfc8bcf38e18315794784.
Sep  4 17:11:38.397553 containerd[1429]: time="2024-09-04T17:11:38.397467454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-292pb,Uid:9ef8a9fb-1499-4be4-a6bd-845f55be48dc,Namespace:calico-system,Attempt:0,} returns sandbox id \"f4c2ef6b2900d30aa420cce785d133f73afc30dcc7bcfc8bcf38e18315794784\""
Sep  4 17:11:38.399939 kubelet[2482]: E0904 17:11:38.398868    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:39.367796 kubelet[2482]: E0904 17:11:39.367751    2482 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-chwdn" podUID="ad9729de-5f0a-425d-b5ea-b886ce65bfc9"
Sep  4 17:11:40.916127 containerd[1429]: time="2024-09-04T17:11:40.915179646Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:11:40.916127 containerd[1429]: time="2024-09-04T17:11:40.915816528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=27474479"
Sep  4 17:11:40.916838 containerd[1429]: time="2024-09-04T17:11:40.916805491Z" level=info msg="ImageCreate event name:\"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:11:40.919277 containerd[1429]: time="2024-09-04T17:11:40.919240097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:11:40.920212 containerd[1429]: time="2024-09-04T17:11:40.920179340Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"28841990\" in 2.786515808s"
Sep  4 17:11:40.920325 containerd[1429]: time="2024-09-04T17:11:40.920293340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\""
Sep  4 17:11:40.921479 containerd[1429]: time="2024-09-04T17:11:40.921440503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\""
Sep  4 17:11:40.934846 containerd[1429]: time="2024-09-04T17:11:40.934803539Z" level=info msg="CreateContainer within sandbox \"35ac898e30de8a9c26ff21991a862d90806877e7fbefe4f2940d33ae36d4756d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep  4 17:11:40.950501 containerd[1429]: time="2024-09-04T17:11:40.950451620Z" level=info msg="CreateContainer within sandbox \"35ac898e30de8a9c26ff21991a862d90806877e7fbefe4f2940d33ae36d4756d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c711b321da05047e6d728cd6a60af69f117e5a9d83be1fc0794145d101336354\""
Sep  4 17:11:40.952557 containerd[1429]: time="2024-09-04T17:11:40.951334223Z" level=info msg="StartContainer for \"c711b321da05047e6d728cd6a60af69f117e5a9d83be1fc0794145d101336354\""
Sep  4 17:11:40.976496 systemd[1]: Started cri-containerd-c711b321da05047e6d728cd6a60af69f117e5a9d83be1fc0794145d101336354.scope - libcontainer container c711b321da05047e6d728cd6a60af69f117e5a9d83be1fc0794145d101336354.
Sep  4 17:11:41.034385 containerd[1429]: time="2024-09-04T17:11:41.034298721Z" level=info msg="StartContainer for \"c711b321da05047e6d728cd6a60af69f117e5a9d83be1fc0794145d101336354\" returns successfully"
Sep  4 17:11:41.367840 kubelet[2482]: E0904 17:11:41.367479    2482 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-chwdn" podUID="ad9729de-5f0a-425d-b5ea-b886ce65bfc9"
Sep  4 17:11:41.460174 kubelet[2482]: E0904 17:11:41.460130    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:41.511649 kubelet[2482]: E0904 17:11:41.511609    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.511649 kubelet[2482]: W0904 17:11:41.511631    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.511649 kubelet[2482]: E0904 17:11:41.511654    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:41.512079 kubelet[2482]: E0904 17:11:41.512047    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.512079 kubelet[2482]: W0904 17:11:41.512061    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.512079 kubelet[2482]: E0904 17:11:41.512074    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:41.514307 kubelet[2482]: E0904 17:11:41.514279    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.514307 kubelet[2482]: W0904 17:11:41.514298    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.514307 kubelet[2482]: E0904 17:11:41.514318    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:41.514909 kubelet[2482]: E0904 17:11:41.514877    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.514909 kubelet[2482]: W0904 17:11:41.514898    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.514909 kubelet[2482]: E0904 17:11:41.514912    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:41.515188 kubelet[2482]: E0904 17:11:41.515168    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.515188 kubelet[2482]: W0904 17:11:41.515181    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.515247 kubelet[2482]: E0904 17:11:41.515193    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:41.515382 kubelet[2482]: E0904 17:11:41.515368    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.515382 kubelet[2482]: W0904 17:11:41.515380    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.515438 kubelet[2482]: E0904 17:11:41.515391    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:41.516423 kubelet[2482]: E0904 17:11:41.516390    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.516423 kubelet[2482]: W0904 17:11:41.516404    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.516423 kubelet[2482]: E0904 17:11:41.516417    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:41.516717 kubelet[2482]: E0904 17:11:41.516647    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.516717 kubelet[2482]: W0904 17:11:41.516658    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.516717 kubelet[2482]: E0904 17:11:41.516672    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:41.517002 kubelet[2482]: E0904 17:11:41.516976    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.517002 kubelet[2482]: W0904 17:11:41.516989    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.517002 kubelet[2482]: E0904 17:11:41.517000    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:41.517211 kubelet[2482]: E0904 17:11:41.517166    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.517211 kubelet[2482]: W0904 17:11:41.517202    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.517362 kubelet[2482]: E0904 17:11:41.517216    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:41.517566 kubelet[2482]: E0904 17:11:41.517538    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.517566 kubelet[2482]: W0904 17:11:41.517551    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.517566 kubelet[2482]: E0904 17:11:41.517563    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:41.517718 kubelet[2482]: E0904 17:11:41.517702    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.517754 kubelet[2482]: W0904 17:11:41.517718    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.517781 kubelet[2482]: E0904 17:11:41.517758    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:41.518089 kubelet[2482]: E0904 17:11:41.518056    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.518089 kubelet[2482]: W0904 17:11:41.518077    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.518089 kubelet[2482]: E0904 17:11:41.518090    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:41.518249 kubelet[2482]: E0904 17:11:41.518235    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.518249 kubelet[2482]: W0904 17:11:41.518245    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.518304 kubelet[2482]: E0904 17:11:41.518256    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:41.518667 kubelet[2482]: E0904 17:11:41.518608    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.518667 kubelet[2482]: W0904 17:11:41.518636    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.518667 kubelet[2482]: E0904 17:11:41.518649    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:41.518961 kubelet[2482]: E0904 17:11:41.518945    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.518961 kubelet[2482]: W0904 17:11:41.518958    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.519143 kubelet[2482]: E0904 17:11:41.518973    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:41.519588 kubelet[2482]: E0904 17:11:41.519567    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.519588 kubelet[2482]: W0904 17:11:41.519584    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.519655 kubelet[2482]: E0904 17:11:41.519601    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:41.519849 kubelet[2482]: E0904 17:11:41.519834    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.519849 kubelet[2482]: W0904 17:11:41.519845    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.519900 kubelet[2482]: E0904 17:11:41.519858    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:41.520127 kubelet[2482]: E0904 17:11:41.520112    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.520127 kubelet[2482]: W0904 17:11:41.520124    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.520185 kubelet[2482]: E0904 17:11:41.520173    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:41.520451 kubelet[2482]: E0904 17:11:41.520435    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.520483 kubelet[2482]: W0904 17:11:41.520452    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.520554 kubelet[2482]: E0904 17:11:41.520511    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:41.520817 kubelet[2482]: E0904 17:11:41.520800    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.520857 kubelet[2482]: W0904 17:11:41.520819    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.520902 kubelet[2482]: E0904 17:11:41.520888    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:41.521092 kubelet[2482]: E0904 17:11:41.521079    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.521092 kubelet[2482]: W0904 17:11:41.521091    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.521151 kubelet[2482]: E0904 17:11:41.521116    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:41.521348 kubelet[2482]: E0904 17:11:41.521335    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.521381 kubelet[2482]: W0904 17:11:41.521347    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.521404 kubelet[2482]: E0904 17:11:41.521385    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:41.521599 kubelet[2482]: E0904 17:11:41.521588    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.521599 kubelet[2482]: W0904 17:11:41.521599    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.521652 kubelet[2482]: E0904 17:11:41.521614    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:41.527881 kubelet[2482]: E0904 17:11:41.527835    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.527881 kubelet[2482]: W0904 17:11:41.527858    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.527881 kubelet[2482]: E0904 17:11:41.527881    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:41.529161 kubelet[2482]: E0904 17:11:41.529139    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.529161 kubelet[2482]: W0904 17:11:41.529156    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.529256 kubelet[2482]: E0904 17:11:41.529232    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:41.529452 kubelet[2482]: E0904 17:11:41.529433    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.529452 kubelet[2482]: W0904 17:11:41.529452    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.529509 kubelet[2482]: E0904 17:11:41.529483    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:41.529610 kubelet[2482]: E0904 17:11:41.529598    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.529610 kubelet[2482]: W0904 17:11:41.529607    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.529691 kubelet[2482]: E0904 17:11:41.529625    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:41.529796 kubelet[2482]: E0904 17:11:41.529782    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.529796 kubelet[2482]: W0904 17:11:41.529792    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.529871 kubelet[2482]: E0904 17:11:41.529806    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:41.530240 kubelet[2482]: E0904 17:11:41.530227    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.530240 kubelet[2482]: W0904 17:11:41.530240    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.530300 kubelet[2482]: E0904 17:11:41.530256    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:41.530503 kubelet[2482]: E0904 17:11:41.530486    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.530538 kubelet[2482]: W0904 17:11:41.530502    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.530538 kubelet[2482]: E0904 17:11:41.530516    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:41.530712 kubelet[2482]: E0904 17:11:41.530698    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.530712 kubelet[2482]: W0904 17:11:41.530709    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.530758 kubelet[2482]: E0904 17:11:41.530721    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:41.531086 kubelet[2482]: E0904 17:11:41.531071    2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep  4 17:11:41.531086 kubelet[2482]: W0904 17:11:41.531083    2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep  4 17:11:41.531147 kubelet[2482]: E0904 17:11:41.531095    2482 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep  4 17:11:42.127100 containerd[1429]: time="2024-09-04T17:11:42.126570632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:11:42.127963 containerd[1429]: time="2024-09-04T17:11:42.127923276Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=4916957"
Sep  4 17:11:42.128873 containerd[1429]: time="2024-09-04T17:11:42.128830278Z" level=info msg="ImageCreate event name:\"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:11:42.131180 containerd[1429]: time="2024-09-04T17:11:42.131152684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:11:42.132161 containerd[1429]: time="2024-09-04T17:11:42.131900045Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6284436\" in 1.210411342s"
Sep  4 17:11:42.132161 containerd[1429]: time="2024-09-04T17:11:42.131949886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\""
Sep  4 17:11:42.134399 containerd[1429]: time="2024-09-04T17:11:42.134346731Z" level=info msg="CreateContainer within sandbox \"f4c2ef6b2900d30aa420cce785d133f73afc30dcc7bcfc8bcf38e18315794784\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep  4 17:11:42.159704 containerd[1429]: time="2024-09-04T17:11:42.159650474Z" level=info msg="CreateContainer within sandbox \"f4c2ef6b2900d30aa420cce785d133f73afc30dcc7bcfc8bcf38e18315794784\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f3a69d511c66f96728dd71416326da7ea24386be663f55e9379466a964191ff0\""
Sep  4 17:11:42.160338 containerd[1429]: time="2024-09-04T17:11:42.160300316Z" level=info msg="StartContainer for \"f3a69d511c66f96728dd71416326da7ea24386be663f55e9379466a964191ff0\""
Sep  4 17:11:42.184760 systemd[1]: run-containerd-runc-k8s.io-f3a69d511c66f96728dd71416326da7ea24386be663f55e9379466a964191ff0-runc.2KzN3a.mount: Deactivated successfully.
Sep  4 17:11:42.193503 systemd[1]: Started cri-containerd-f3a69d511c66f96728dd71416326da7ea24386be663f55e9379466a964191ff0.scope - libcontainer container f3a69d511c66f96728dd71416326da7ea24386be663f55e9379466a964191ff0.
Sep  4 17:11:42.226925 containerd[1429]: time="2024-09-04T17:11:42.226880360Z" level=info msg="StartContainer for \"f3a69d511c66f96728dd71416326da7ea24386be663f55e9379466a964191ff0\" returns successfully"
Sep  4 17:11:42.250500 systemd[1]: cri-containerd-f3a69d511c66f96728dd71416326da7ea24386be663f55e9379466a964191ff0.scope: Deactivated successfully.
Sep  4 17:11:42.324890 containerd[1429]: time="2024-09-04T17:11:42.320164871Z" level=info msg="shim disconnected" id=f3a69d511c66f96728dd71416326da7ea24386be663f55e9379466a964191ff0 namespace=k8s.io
Sep  4 17:11:42.324890 containerd[1429]: time="2024-09-04T17:11:42.324892763Z" level=warning msg="cleaning up after shim disconnected" id=f3a69d511c66f96728dd71416326da7ea24386be663f55e9379466a964191ff0 namespace=k8s.io
Sep  4 17:11:42.325134 containerd[1429]: time="2024-09-04T17:11:42.324909763Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep  4 17:11:42.467942 kubelet[2482]: E0904 17:11:42.467427    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:42.470222 containerd[1429]: time="2024-09-04T17:11:42.469110319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\""
Sep  4 17:11:42.472577 kubelet[2482]: I0904 17:11:42.472085    2482 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep  4 17:11:42.474275 kubelet[2482]: E0904 17:11:42.473989    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:42.484976 kubelet[2482]: I0904 17:11:42.484122    2482 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-5c4d7cdcbd-dxb82" podStartSLOduration=2.696682146 podCreationTimestamp="2024-09-04 17:11:37 +0000 UTC" firstStartedPulling="2024-09-04 17:11:38.133294411 +0000 UTC m=+21.877552760" lastFinishedPulling="2024-09-04 17:11:40.920694141 +0000 UTC m=+24.664952490" observedRunningTime="2024-09-04 17:11:41.485733079 +0000 UTC m=+25.229991468" watchObservedRunningTime="2024-09-04 17:11:42.484081876 +0000 UTC m=+26.228340265"
Sep  4 17:11:42.928187 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3a69d511c66f96728dd71416326da7ea24386be663f55e9379466a964191ff0-rootfs.mount: Deactivated successfully.
Sep  4 17:11:43.367607 kubelet[2482]: E0904 17:11:43.367557    2482 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-chwdn" podUID="ad9729de-5f0a-425d-b5ea-b886ce65bfc9"
Sep  4 17:11:45.199609 containerd[1429]: time="2024-09-04T17:11:45.199547525Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:11:45.200107 containerd[1429]: time="2024-09-04T17:11:45.200048686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=86859887"
Sep  4 17:11:45.202740 containerd[1429]: time="2024-09-04T17:11:45.202702652Z" level=info msg="ImageCreate event name:\"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:11:45.205611 containerd[1429]: time="2024-09-04T17:11:45.205385058Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:11:45.206230 containerd[1429]: time="2024-09-04T17:11:45.206136500Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"88227406\" in 2.73694982s"
Sep  4 17:11:45.206230 containerd[1429]: time="2024-09-04T17:11:45.206191060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\""
Sep  4 17:11:45.208369 containerd[1429]: time="2024-09-04T17:11:45.208331185Z" level=info msg="CreateContainer within sandbox \"f4c2ef6b2900d30aa420cce785d133f73afc30dcc7bcfc8bcf38e18315794784\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep  4 17:11:45.223866 containerd[1429]: time="2024-09-04T17:11:45.223738499Z" level=info msg="CreateContainer within sandbox \"f4c2ef6b2900d30aa420cce785d133f73afc30dcc7bcfc8bcf38e18315794784\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5d126139e3328e98e57c3b04c7264bdf7b37e4532878f386be573ce45bc35530\""
Sep  4 17:11:45.225810 containerd[1429]: time="2024-09-04T17:11:45.224257220Z" level=info msg="StartContainer for \"5d126139e3328e98e57c3b04c7264bdf7b37e4532878f386be573ce45bc35530\""
Sep  4 17:11:45.244689 systemd[1]: run-containerd-runc-k8s.io-5d126139e3328e98e57c3b04c7264bdf7b37e4532878f386be573ce45bc35530-runc.7AKdvu.mount: Deactivated successfully.
Sep  4 17:11:45.258486 systemd[1]: Started cri-containerd-5d126139e3328e98e57c3b04c7264bdf7b37e4532878f386be573ce45bc35530.scope - libcontainer container 5d126139e3328e98e57c3b04c7264bdf7b37e4532878f386be573ce45bc35530.
Sep  4 17:11:45.316596 containerd[1429]: time="2024-09-04T17:11:45.316550265Z" level=info msg="StartContainer for \"5d126139e3328e98e57c3b04c7264bdf7b37e4532878f386be573ce45bc35530\" returns successfully"
Sep  4 17:11:45.368213 kubelet[2482]: E0904 17:11:45.367874    2482 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-chwdn" podUID="ad9729de-5f0a-425d-b5ea-b886ce65bfc9"
Sep  4 17:11:45.473389 kubelet[2482]: E0904 17:11:45.473269    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:45.871192 containerd[1429]: time="2024-09-04T17:11:45.871062620Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE         \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep  4 17:11:45.873085 systemd[1]: cri-containerd-5d126139e3328e98e57c3b04c7264bdf7b37e4532878f386be573ce45bc35530.scope: Deactivated successfully.
Sep  4 17:11:45.891534 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d126139e3328e98e57c3b04c7264bdf7b37e4532878f386be573ce45bc35530-rootfs.mount: Deactivated successfully.
Sep  4 17:11:45.934376 kubelet[2482]: I0904 17:11:45.934338    2482 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Sep  4 17:11:45.935935 containerd[1429]: time="2024-09-04T17:11:45.935694484Z" level=info msg="shim disconnected" id=5d126139e3328e98e57c3b04c7264bdf7b37e4532878f386be573ce45bc35530 namespace=k8s.io
Sep  4 17:11:45.935935 containerd[1429]: time="2024-09-04T17:11:45.935752084Z" level=warning msg="cleaning up after shim disconnected" id=5d126139e3328e98e57c3b04c7264bdf7b37e4532878f386be573ce45bc35530 namespace=k8s.io
Sep  4 17:11:45.935935 containerd[1429]: time="2024-09-04T17:11:45.935767884Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep  4 17:11:45.952366 kubelet[2482]: I0904 17:11:45.951856    2482 topology_manager.go:215] "Topology Admit Handler" podUID="7bb12c91-2761-4677-914c-40c5d21a7ccb" podNamespace="kube-system" podName="coredns-5dd5756b68-c5ss6"
Sep  4 17:11:45.956401 kubelet[2482]: I0904 17:11:45.956359    2482 topology_manager.go:215] "Topology Admit Handler" podUID="952dfd7d-4c90-4dae-9fa9-05a48a9c20ce" podNamespace="kube-system" podName="coredns-5dd5756b68-kx782"
Sep  4 17:11:45.958155 kubelet[2482]: I0904 17:11:45.958117    2482 topology_manager.go:215] "Topology Admit Handler" podUID="a8d222b3-d5db-4dcc-9a51-2bef0d400fc1" podNamespace="calico-system" podName="calico-kube-controllers-78c75d8fb8-b7r8w"
Sep  4 17:11:45.962254 systemd[1]: Created slice kubepods-burstable-pod7bb12c91_2761_4677_914c_40c5d21a7ccb.slice - libcontainer container kubepods-burstable-pod7bb12c91_2761_4677_914c_40c5d21a7ccb.slice.
Sep  4 17:11:45.976693 systemd[1]: Created slice kubepods-burstable-pod952dfd7d_4c90_4dae_9fa9_05a48a9c20ce.slice - libcontainer container kubepods-burstable-pod952dfd7d_4c90_4dae_9fa9_05a48a9c20ce.slice.
Sep  4 17:11:45.982118 systemd[1]: Created slice kubepods-besteffort-poda8d222b3_d5db_4dcc_9a51_2bef0d400fc1.slice - libcontainer container kubepods-besteffort-poda8d222b3_d5db_4dcc_9a51_2bef0d400fc1.slice.
Sep  4 17:11:46.048297 kubelet[2482]: I0904 17:11:46.048262    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cskl7\" (UniqueName: \"kubernetes.io/projected/7bb12c91-2761-4677-914c-40c5d21a7ccb-kube-api-access-cskl7\") pod \"coredns-5dd5756b68-c5ss6\" (UID: \"7bb12c91-2761-4677-914c-40c5d21a7ccb\") " pod="kube-system/coredns-5dd5756b68-c5ss6"
Sep  4 17:11:46.048537 kubelet[2482]: I0904 17:11:46.048394    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7bb12c91-2761-4677-914c-40c5d21a7ccb-config-volume\") pod \"coredns-5dd5756b68-c5ss6\" (UID: \"7bb12c91-2761-4677-914c-40c5d21a7ccb\") " pod="kube-system/coredns-5dd5756b68-c5ss6"
Sep  4 17:11:46.149363 kubelet[2482]: I0904 17:11:46.149236    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8d222b3-d5db-4dcc-9a51-2bef0d400fc1-tigera-ca-bundle\") pod \"calico-kube-controllers-78c75d8fb8-b7r8w\" (UID: \"a8d222b3-d5db-4dcc-9a51-2bef0d400fc1\") " pod="calico-system/calico-kube-controllers-78c75d8fb8-b7r8w"
Sep  4 17:11:46.149363 kubelet[2482]: I0904 17:11:46.149288    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8zln\" (UniqueName: \"kubernetes.io/projected/952dfd7d-4c90-4dae-9fa9-05a48a9c20ce-kube-api-access-r8zln\") pod \"coredns-5dd5756b68-kx782\" (UID: \"952dfd7d-4c90-4dae-9fa9-05a48a9c20ce\") " pod="kube-system/coredns-5dd5756b68-kx782"
Sep  4 17:11:46.149363 kubelet[2482]: I0904 17:11:46.149331    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvz7j\" (UniqueName: \"kubernetes.io/projected/a8d222b3-d5db-4dcc-9a51-2bef0d400fc1-kube-api-access-pvz7j\") pod \"calico-kube-controllers-78c75d8fb8-b7r8w\" (UID: \"a8d222b3-d5db-4dcc-9a51-2bef0d400fc1\") " pod="calico-system/calico-kube-controllers-78c75d8fb8-b7r8w"
Sep  4 17:11:46.149363 kubelet[2482]: I0904 17:11:46.149354    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/952dfd7d-4c90-4dae-9fa9-05a48a9c20ce-config-volume\") pod \"coredns-5dd5756b68-kx782\" (UID: \"952dfd7d-4c90-4dae-9fa9-05a48a9c20ce\") " pod="kube-system/coredns-5dd5756b68-kx782"
Sep  4 17:11:46.274415 kubelet[2482]: E0904 17:11:46.274384    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:46.275193 containerd[1429]: time="2024-09-04T17:11:46.275155020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-c5ss6,Uid:7bb12c91-2761-4677-914c-40c5d21a7ccb,Namespace:kube-system,Attempt:0,}"
Sep  4 17:11:46.280158 kubelet[2482]: E0904 17:11:46.280127    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:46.280997 containerd[1429]: time="2024-09-04T17:11:46.280958192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-kx782,Uid:952dfd7d-4c90-4dae-9fa9-05a48a9c20ce,Namespace:kube-system,Attempt:0,}"
Sep  4 17:11:46.285327 containerd[1429]: time="2024-09-04T17:11:46.285261721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78c75d8fb8-b7r8w,Uid:a8d222b3-d5db-4dcc-9a51-2bef0d400fc1,Namespace:calico-system,Attempt:0,}"
Sep  4 17:11:46.497438 kubelet[2482]: E0904 17:11:46.491900    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:46.514382 containerd[1429]: time="2024-09-04T17:11:46.514247494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\""
Sep  4 17:11:46.678141 containerd[1429]: time="2024-09-04T17:11:46.677880367Z" level=error msg="Failed to destroy network for sandbox \"7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:11:46.678319 containerd[1429]: time="2024-09-04T17:11:46.678264448Z" level=error msg="encountered an error cleaning up failed sandbox \"7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:11:46.678361 containerd[1429]: time="2024-09-04T17:11:46.678344648Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-c5ss6,Uid:7bb12c91-2761-4677-914c-40c5d21a7ccb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:11:46.683065 containerd[1429]: time="2024-09-04T17:11:46.682963858Z" level=error msg="Failed to destroy network for sandbox \"9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:11:46.683537 containerd[1429]: time="2024-09-04T17:11:46.683292658Z" level=error msg="encountered an error cleaning up failed sandbox \"9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:11:46.683537 containerd[1429]: time="2024-09-04T17:11:46.683360379Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-kx782,Uid:952dfd7d-4c90-4dae-9fa9-05a48a9c20ce,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:11:46.684221 kubelet[2482]: E0904 17:11:46.683624    2482 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:11:46.684221 kubelet[2482]: E0904 17:11:46.683702    2482 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-kx782"
Sep  4 17:11:46.684221 kubelet[2482]: E0904 17:11:46.683722    2482 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-kx782"
Sep  4 17:11:46.684386 kubelet[2482]: E0904 17:11:46.683788    2482 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-kx782_kube-system(952dfd7d-4c90-4dae-9fa9-05a48a9c20ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-kx782_kube-system(952dfd7d-4c90-4dae-9fa9-05a48a9c20ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-kx782" podUID="952dfd7d-4c90-4dae-9fa9-05a48a9c20ce"
Sep  4 17:11:46.684835 kubelet[2482]: E0904 17:11:46.684676    2482 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:11:46.684835 kubelet[2482]: E0904 17:11:46.684739    2482 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-c5ss6"
Sep  4 17:11:46.684835 kubelet[2482]: E0904 17:11:46.684759    2482 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-c5ss6"
Sep  4 17:11:46.684954 kubelet[2482]: E0904 17:11:46.684805    2482 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-c5ss6_kube-system(7bb12c91-2761-4677-914c-40c5d21a7ccb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-c5ss6_kube-system(7bb12c91-2761-4677-914c-40c5d21a7ccb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-c5ss6" podUID="7bb12c91-2761-4677-914c-40c5d21a7ccb"
Sep  4 17:11:46.688545 containerd[1429]: time="2024-09-04T17:11:46.688501350Z" level=error msg="Failed to destroy network for sandbox \"54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:11:46.688908 containerd[1429]: time="2024-09-04T17:11:46.688880111Z" level=error msg="encountered an error cleaning up failed sandbox \"54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:11:46.688947 containerd[1429]: time="2024-09-04T17:11:46.688932471Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78c75d8fb8-b7r8w,Uid:a8d222b3-d5db-4dcc-9a51-2bef0d400fc1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:11:46.689160 kubelet[2482]: E0904 17:11:46.689135    2482 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:11:46.689237 kubelet[2482]: E0904 17:11:46.689187    2482 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78c75d8fb8-b7r8w"
Sep  4 17:11:46.689237 kubelet[2482]: E0904 17:11:46.689207    2482 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78c75d8fb8-b7r8w"
Sep  4 17:11:46.689330 kubelet[2482]: E0904 17:11:46.689255    2482 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78c75d8fb8-b7r8w_calico-system(a8d222b3-d5db-4dcc-9a51-2bef0d400fc1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78c75d8fb8-b7r8w_calico-system(a8d222b3-d5db-4dcc-9a51-2bef0d400fc1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78c75d8fb8-b7r8w" podUID="a8d222b3-d5db-4dcc-9a51-2bef0d400fc1"
Sep  4 17:11:47.222407 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e-shm.mount: Deactivated successfully.
Sep  4 17:11:47.374415 systemd[1]: Created slice kubepods-besteffort-podad9729de_5f0a_425d_b5ea_b886ce65bfc9.slice - libcontainer container kubepods-besteffort-podad9729de_5f0a_425d_b5ea_b886ce65bfc9.slice.
Sep  4 17:11:47.382213 containerd[1429]: time="2024-09-04T17:11:47.382168178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-chwdn,Uid:ad9729de-5f0a-425d-b5ea-b886ce65bfc9,Namespace:calico-system,Attempt:0,}"
Sep  4 17:11:47.436021 containerd[1429]: time="2024-09-04T17:11:47.435859970Z" level=error msg="Failed to destroy network for sandbox \"d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:11:47.436786 containerd[1429]: time="2024-09-04T17:11:47.436750292Z" level=error msg="encountered an error cleaning up failed sandbox \"d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:11:47.436957 containerd[1429]: time="2024-09-04T17:11:47.436901372Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-chwdn,Uid:ad9729de-5f0a-425d-b5ea-b886ce65bfc9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:11:47.437538 kubelet[2482]: E0904 17:11:47.437499    2482 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:11:47.437595 kubelet[2482]: E0904 17:11:47.437557    2482 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-chwdn"
Sep  4 17:11:47.437595 kubelet[2482]: E0904 17:11:47.437579    2482 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-chwdn"
Sep  4 17:11:47.437658 kubelet[2482]: E0904 17:11:47.437628    2482 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-chwdn_calico-system(ad9729de-5f0a-425d-b5ea-b886ce65bfc9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-chwdn_calico-system(ad9729de-5f0a-425d-b5ea-b886ce65bfc9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-chwdn" podUID="ad9729de-5f0a-425d-b5ea-b886ce65bfc9"
Sep  4 17:11:47.438445 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0-shm.mount: Deactivated successfully.
Sep  4 17:11:47.497433 kubelet[2482]: I0904 17:11:47.497082    2482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0"
Sep  4 17:11:47.498373 containerd[1429]: time="2024-09-04T17:11:47.497781259Z" level=info msg="StopPodSandbox for \"d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0\""
Sep  4 17:11:47.498678 containerd[1429]: time="2024-09-04T17:11:47.498642381Z" level=info msg="Ensure that sandbox d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0 in task-service has been cleanup successfully"
Sep  4 17:11:47.499388 kubelet[2482]: I0904 17:11:47.499289    2482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071"
Sep  4 17:11:47.500049 containerd[1429]: time="2024-09-04T17:11:47.500006184Z" level=info msg="StopPodSandbox for \"54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071\""
Sep  4 17:11:47.500239 containerd[1429]: time="2024-09-04T17:11:47.500216224Z" level=info msg="Ensure that sandbox 54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071 in task-service has been cleanup successfully"
Sep  4 17:11:47.503491 kubelet[2482]: I0904 17:11:47.503464    2482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e"
Sep  4 17:11:47.504422 containerd[1429]: time="2024-09-04T17:11:47.504393353Z" level=info msg="StopPodSandbox for \"7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e\""
Sep  4 17:11:47.504608 containerd[1429]: time="2024-09-04T17:11:47.504586713Z" level=info msg="Ensure that sandbox 7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e in task-service has been cleanup successfully"
Sep  4 17:11:47.505570 kubelet[2482]: I0904 17:11:47.505541    2482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598"
Sep  4 17:11:47.506048 containerd[1429]: time="2024-09-04T17:11:47.506000596Z" level=info msg="StopPodSandbox for \"9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598\""
Sep  4 17:11:47.506228 containerd[1429]: time="2024-09-04T17:11:47.506198636Z" level=info msg="Ensure that sandbox 9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598 in task-service has been cleanup successfully"
Sep  4 17:11:47.545687 containerd[1429]: time="2024-09-04T17:11:47.545635239Z" level=error msg="StopPodSandbox for \"7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e\" failed" error="failed to destroy network for sandbox \"7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:11:47.546371 kubelet[2482]: E0904 17:11:47.546149    2482 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e"
Sep  4 17:11:47.546371 kubelet[2482]: E0904 17:11:47.546225    2482 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e"}
Sep  4 17:11:47.546371 kubelet[2482]: E0904 17:11:47.546290    2482 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7bb12c91-2761-4677-914c-40c5d21a7ccb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep  4 17:11:47.546371 kubelet[2482]: E0904 17:11:47.546348    2482 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7bb12c91-2761-4677-914c-40c5d21a7ccb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-c5ss6" podUID="7bb12c91-2761-4677-914c-40c5d21a7ccb"
Sep  4 17:11:47.549185 containerd[1429]: time="2024-09-04T17:11:47.549145886Z" level=error msg="StopPodSandbox for \"d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0\" failed" error="failed to destroy network for sandbox \"d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:11:47.549548 kubelet[2482]: E0904 17:11:47.549408    2482 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0"
Sep  4 17:11:47.549548 kubelet[2482]: E0904 17:11:47.549450    2482 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0"}
Sep  4 17:11:47.549548 kubelet[2482]: E0904 17:11:47.549481    2482 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ad9729de-5f0a-425d-b5ea-b886ce65bfc9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep  4 17:11:47.549548 kubelet[2482]: E0904 17:11:47.549516    2482 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ad9729de-5f0a-425d-b5ea-b886ce65bfc9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-chwdn" podUID="ad9729de-5f0a-425d-b5ea-b886ce65bfc9"
Sep  4 17:11:47.552393 containerd[1429]: time="2024-09-04T17:11:47.552353413Z" level=error msg="StopPodSandbox for \"54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071\" failed" error="failed to destroy network for sandbox \"54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:11:47.552703 kubelet[2482]: E0904 17:11:47.552579    2482 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071"
Sep  4 17:11:47.552703 kubelet[2482]: E0904 17:11:47.552612    2482 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071"}
Sep  4 17:11:47.552703 kubelet[2482]: E0904 17:11:47.552643    2482 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a8d222b3-d5db-4dcc-9a51-2bef0d400fc1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep  4 17:11:47.552703 kubelet[2482]: E0904 17:11:47.552667    2482 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a8d222b3-d5db-4dcc-9a51-2bef0d400fc1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78c75d8fb8-b7r8w" podUID="a8d222b3-d5db-4dcc-9a51-2bef0d400fc1"
Sep  4 17:11:47.559019 containerd[1429]: time="2024-09-04T17:11:47.558976507Z" level=error msg="StopPodSandbox for \"9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598\" failed" error="failed to destroy network for sandbox \"9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep  4 17:11:47.559302 kubelet[2482]: E0904 17:11:47.559265    2482 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598"
Sep  4 17:11:47.559502 kubelet[2482]: E0904 17:11:47.559420    2482 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598"}
Sep  4 17:11:47.559502 kubelet[2482]: E0904 17:11:47.559459    2482 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"952dfd7d-4c90-4dae-9fa9-05a48a9c20ce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep  4 17:11:47.559502 kubelet[2482]: E0904 17:11:47.559484    2482 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"952dfd7d-4c90-4dae-9fa9-05a48a9c20ce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-kx782" podUID="952dfd7d-4c90-4dae-9fa9-05a48a9c20ce"
Sep  4 17:11:49.887945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount603120669.mount: Deactivated successfully.
Sep  4 17:11:49.964656 containerd[1429]: time="2024-09-04T17:11:49.964531102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:11:49.965904 containerd[1429]: time="2024-09-04T17:11:49.965695544Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=113057300"
Sep  4 17:11:49.966962 containerd[1429]: time="2024-09-04T17:11:49.966708706Z" level=info msg="ImageCreate event name:\"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:11:49.969499 containerd[1429]: time="2024-09-04T17:11:49.969300951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:11:49.969966 containerd[1429]: time="2024-09-04T17:11:49.969942793Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"113057162\" in 3.455658538s"
Sep  4 17:11:49.970020 containerd[1429]: time="2024-09-04T17:11:49.969972353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\""
Sep  4 17:11:49.987843 containerd[1429]: time="2024-09-04T17:11:49.987795708Z" level=info msg="CreateContainer within sandbox \"f4c2ef6b2900d30aa420cce785d133f73afc30dcc7bcfc8bcf38e18315794784\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Sep  4 17:11:50.007557 containerd[1429]: time="2024-09-04T17:11:50.007495026Z" level=info msg="CreateContainer within sandbox \"f4c2ef6b2900d30aa420cce785d133f73afc30dcc7bcfc8bcf38e18315794784\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"385ab9336daf1a275549f20361b50635ef10307abc407358cec3da6339e9da38\""
Sep  4 17:11:50.008561 containerd[1429]: time="2024-09-04T17:11:50.008263267Z" level=info msg="StartContainer for \"385ab9336daf1a275549f20361b50635ef10307abc407358cec3da6339e9da38\""
Sep  4 17:11:50.064517 systemd[1]: Started cri-containerd-385ab9336daf1a275549f20361b50635ef10307abc407358cec3da6339e9da38.scope - libcontainer container 385ab9336daf1a275549f20361b50635ef10307abc407358cec3da6339e9da38.
Sep  4 17:11:50.105015 containerd[1429]: time="2024-09-04T17:11:50.103148488Z" level=info msg="StartContainer for \"385ab9336daf1a275549f20361b50635ef10307abc407358cec3da6339e9da38\" returns successfully"
Sep  4 17:11:50.273348 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Sep  4 17:11:50.273486 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Sep  4 17:11:50.515786 kubelet[2482]: E0904 17:11:50.515739    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:51.281069 kubelet[2482]: I0904 17:11:51.281023    2482 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep  4 17:11:51.281701 kubelet[2482]: E0904 17:11:51.281680    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:51.300772 kubelet[2482]: I0904 17:11:51.300728    2482 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-292pb" podStartSLOduration=2.731021585 podCreationTimestamp="2024-09-04 17:11:37 +0000 UTC" firstStartedPulling="2024-09-04 17:11:38.400549022 +0000 UTC m=+22.144807411" lastFinishedPulling="2024-09-04 17:11:49.970214233 +0000 UTC m=+33.714472582" observedRunningTime="2024-09-04 17:11:50.529119261 +0000 UTC m=+34.273377650" watchObservedRunningTime="2024-09-04 17:11:51.300686756 +0000 UTC m=+35.044945145"
Sep  4 17:11:51.516390 kubelet[2482]: E0904 17:11:51.516353    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:51.516747 kubelet[2482]: E0904 17:11:51.516629    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:51.815372 kernel: bpftool[3650]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Sep  4 17:11:51.984614 systemd-networkd[1373]: vxlan.calico: Link UP
Sep  4 17:11:51.984621 systemd-networkd[1373]: vxlan.calico: Gained carrier
Sep  4 17:11:52.518757 kubelet[2482]: E0904 17:11:52.518663    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:53.368520 systemd-networkd[1373]: vxlan.calico: Gained IPv6LL
Sep  4 17:11:53.596658 systemd[1]: Started sshd@7-10.0.0.33:22-10.0.0.1:56008.service - OpenSSH per-connection server daemon (10.0.0.1:56008).
Sep  4 17:11:53.647010 sshd[3749]: Accepted publickey for core from 10.0.0.1 port 56008 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep  4 17:11:53.648869 sshd[3749]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:11:53.655862 systemd-logind[1410]: New session 8 of user core.
Sep  4 17:11:53.659566 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep  4 17:11:53.904436 sshd[3749]: pam_unix(sshd:session): session closed for user core
Sep  4 17:11:53.908780 systemd[1]: sshd@7-10.0.0.33:22-10.0.0.1:56008.service: Deactivated successfully.
Sep  4 17:11:53.910868 systemd[1]: session-8.scope: Deactivated successfully.
Sep  4 17:11:53.911573 systemd-logind[1410]: Session 8 logged out. Waiting for processes to exit.
Sep  4 17:11:53.912901 systemd-logind[1410]: Removed session 8.
Sep  4 17:11:58.369997 containerd[1429]: time="2024-09-04T17:11:58.369623911Z" level=info msg="StopPodSandbox for \"7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e\""
Sep  4 17:11:58.369997 containerd[1429]: time="2024-09-04T17:11:58.369717831Z" level=info msg="StopPodSandbox for \"54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071\""
Sep  4 17:11:58.599800 containerd[1429]: 2024-09-04 17:11:58.458 [INFO][3804] k8s.go 608: Cleaning up netns ContainerID="7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e"
Sep  4 17:11:58.599800 containerd[1429]: 2024-09-04 17:11:58.459 [INFO][3804] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e" iface="eth0" netns="/var/run/netns/cni-5d22979d-a1bf-e938-bebd-fd504be10a6a"
Sep  4 17:11:58.599800 containerd[1429]: 2024-09-04 17:11:58.460 [INFO][3804] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e" iface="eth0" netns="/var/run/netns/cni-5d22979d-a1bf-e938-bebd-fd504be10a6a"
Sep  4 17:11:58.599800 containerd[1429]: 2024-09-04 17:11:58.461 [INFO][3804] dataplane_linux.go 568: Workload's veth was already gone.  Nothing to do. ContainerID="7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e" iface="eth0" netns="/var/run/netns/cni-5d22979d-a1bf-e938-bebd-fd504be10a6a"
Sep  4 17:11:58.599800 containerd[1429]: 2024-09-04 17:11:58.461 [INFO][3804] k8s.go 615: Releasing IP address(es) ContainerID="7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e"
Sep  4 17:11:58.599800 containerd[1429]: 2024-09-04 17:11:58.461 [INFO][3804] utils.go 188: Calico CNI releasing IP address ContainerID="7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e"
Sep  4 17:11:58.599800 containerd[1429]: 2024-09-04 17:11:58.576 [INFO][3818] ipam_plugin.go 417: Releasing address using handleID ContainerID="7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e" HandleID="k8s-pod-network.7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e" Workload="localhost-k8s-coredns--5dd5756b68--c5ss6-eth0"
Sep  4 17:11:58.599800 containerd[1429]: 2024-09-04 17:11:58.581 [INFO][3818] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:11:58.599800 containerd[1429]: 2024-09-04 17:11:58.581 [INFO][3818] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:11:58.599800 containerd[1429]: 2024-09-04 17:11:58.591 [WARNING][3818] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e" HandleID="k8s-pod-network.7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e" Workload="localhost-k8s-coredns--5dd5756b68--c5ss6-eth0"
Sep  4 17:11:58.599800 containerd[1429]: 2024-09-04 17:11:58.591 [INFO][3818] ipam_plugin.go 445: Releasing address using workloadID ContainerID="7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e" HandleID="k8s-pod-network.7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e" Workload="localhost-k8s-coredns--5dd5756b68--c5ss6-eth0"
Sep  4 17:11:58.599800 containerd[1429]: 2024-09-04 17:11:58.592 [INFO][3818] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:11:58.599800 containerd[1429]: 2024-09-04 17:11:58.594 [INFO][3804] k8s.go 621: Teardown processing complete. ContainerID="7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e"
Sep  4 17:11:58.600424 containerd[1429]: time="2024-09-04T17:11:58.600380953Z" level=info msg="TearDown network for sandbox \"7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e\" successfully"
Sep  4 17:11:58.600474 containerd[1429]: time="2024-09-04T17:11:58.600432073Z" level=info msg="StopPodSandbox for \"7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e\" returns successfully"
Sep  4 17:11:58.600807 kubelet[2482]: E0904 17:11:58.600772    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:58.601610 containerd[1429]: time="2024-09-04T17:11:58.601578315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-c5ss6,Uid:7bb12c91-2761-4677-914c-40c5d21a7ccb,Namespace:kube-system,Attempt:1,}"
Sep  4 17:11:58.605713 systemd[1]: run-netns-cni\x2d5d22979d\x2da1bf\x2de938\x2dbebd\x2dfd504be10a6a.mount: Deactivated successfully.
Sep  4 17:11:58.614223 containerd[1429]: 2024-09-04 17:11:58.461 [INFO][3799] k8s.go 608: Cleaning up netns ContainerID="54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071"
Sep  4 17:11:58.614223 containerd[1429]: 2024-09-04 17:11:58.461 [INFO][3799] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071" iface="eth0" netns="/var/run/netns/cni-30947657-c176-5a67-3c27-d69aa942cb1b"
Sep  4 17:11:58.614223 containerd[1429]: 2024-09-04 17:11:58.461 [INFO][3799] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071" iface="eth0" netns="/var/run/netns/cni-30947657-c176-5a67-3c27-d69aa942cb1b"
Sep  4 17:11:58.614223 containerd[1429]: 2024-09-04 17:11:58.462 [INFO][3799] dataplane_linux.go 568: Workload's veth was already gone.  Nothing to do. ContainerID="54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071" iface="eth0" netns="/var/run/netns/cni-30947657-c176-5a67-3c27-d69aa942cb1b"
Sep  4 17:11:58.614223 containerd[1429]: 2024-09-04 17:11:58.462 [INFO][3799] k8s.go 615: Releasing IP address(es) ContainerID="54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071"
Sep  4 17:11:58.614223 containerd[1429]: 2024-09-04 17:11:58.462 [INFO][3799] utils.go 188: Calico CNI releasing IP address ContainerID="54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071"
Sep  4 17:11:58.614223 containerd[1429]: 2024-09-04 17:11:58.576 [INFO][3819] ipam_plugin.go 417: Releasing address using handleID ContainerID="54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071" HandleID="k8s-pod-network.54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071" Workload="localhost-k8s-calico--kube--controllers--78c75d8fb8--b7r8w-eth0"
Sep  4 17:11:58.614223 containerd[1429]: 2024-09-04 17:11:58.581 [INFO][3819] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:11:58.614223 containerd[1429]: 2024-09-04 17:11:58.592 [INFO][3819] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:11:58.614223 containerd[1429]: 2024-09-04 17:11:58.608 [WARNING][3819] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071" HandleID="k8s-pod-network.54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071" Workload="localhost-k8s-calico--kube--controllers--78c75d8fb8--b7r8w-eth0"
Sep  4 17:11:58.614223 containerd[1429]: 2024-09-04 17:11:58.608 [INFO][3819] ipam_plugin.go 445: Releasing address using workloadID ContainerID="54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071" HandleID="k8s-pod-network.54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071" Workload="localhost-k8s-calico--kube--controllers--78c75d8fb8--b7r8w-eth0"
Sep  4 17:11:58.614223 containerd[1429]: 2024-09-04 17:11:58.610 [INFO][3819] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:11:58.614223 containerd[1429]: 2024-09-04 17:11:58.612 [INFO][3799] k8s.go 621: Teardown processing complete. ContainerID="54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071"
Sep  4 17:11:58.614223 containerd[1429]: time="2024-09-04T17:11:58.614064294Z" level=info msg="TearDown network for sandbox \"54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071\" successfully"
Sep  4 17:11:58.614223 containerd[1429]: time="2024-09-04T17:11:58.614131134Z" level=info msg="StopPodSandbox for \"54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071\" returns successfully"
Sep  4 17:11:58.614739 containerd[1429]: time="2024-09-04T17:11:58.614709335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78c75d8fb8-b7r8w,Uid:a8d222b3-d5db-4dcc-9a51-2bef0d400fc1,Namespace:calico-system,Attempt:1,}"
Sep  4 17:11:58.616651 systemd[1]: run-netns-cni\x2d30947657\x2dc176\x2d5a67\x2d3c27\x2dd69aa942cb1b.mount: Deactivated successfully.
Sep  4 17:11:58.767862 systemd-networkd[1373]: calid40c5f4a50c: Link UP
Sep  4 17:11:58.769903 systemd-networkd[1373]: calid40c5f4a50c: Gained carrier
Sep  4 17:11:58.785649 containerd[1429]: 2024-09-04 17:11:58.681 [INFO][3834] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--c5ss6-eth0 coredns-5dd5756b68- kube-system  7bb12c91-2761-4677-914c-40c5d21a7ccb 801 0 2024-09-04 17:11:31 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s  localhost  coredns-5dd5756b68-c5ss6 eth0 coredns [] []   [kns.kube-system ksa.kube-system.coredns] calid40c5f4a50c  [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="34d50c5a8f4bf76419fd782e0fdb29bc0aac9046ca64982bb54bf60dbab03d6d" Namespace="kube-system" Pod="coredns-5dd5756b68-c5ss6" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--c5ss6-"
Sep  4 17:11:58.785649 containerd[1429]: 2024-09-04 17:11:58.681 [INFO][3834] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="34d50c5a8f4bf76419fd782e0fdb29bc0aac9046ca64982bb54bf60dbab03d6d" Namespace="kube-system" Pod="coredns-5dd5756b68-c5ss6" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--c5ss6-eth0"
Sep  4 17:11:58.785649 containerd[1429]: 2024-09-04 17:11:58.710 [INFO][3860] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="34d50c5a8f4bf76419fd782e0fdb29bc0aac9046ca64982bb54bf60dbab03d6d" HandleID="k8s-pod-network.34d50c5a8f4bf76419fd782e0fdb29bc0aac9046ca64982bb54bf60dbab03d6d" Workload="localhost-k8s-coredns--5dd5756b68--c5ss6-eth0"
Sep  4 17:11:58.785649 containerd[1429]: 2024-09-04 17:11:58.726 [INFO][3860] ipam_plugin.go 270: Auto assigning IP ContainerID="34d50c5a8f4bf76419fd782e0fdb29bc0aac9046ca64982bb54bf60dbab03d6d" HandleID="k8s-pod-network.34d50c5a8f4bf76419fd782e0fdb29bc0aac9046ca64982bb54bf60dbab03d6d" Workload="localhost-k8s-coredns--5dd5756b68--c5ss6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003447e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-c5ss6", "timestamp":"2024-09-04 17:11:58.710710486 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep  4 17:11:58.785649 containerd[1429]: 2024-09-04 17:11:58.726 [INFO][3860] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:11:58.785649 containerd[1429]: 2024-09-04 17:11:58.726 [INFO][3860] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:11:58.785649 containerd[1429]: 2024-09-04 17:11:58.726 [INFO][3860] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep  4 17:11:58.785649 containerd[1429]: 2024-09-04 17:11:58.732 [INFO][3860] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.34d50c5a8f4bf76419fd782e0fdb29bc0aac9046ca64982bb54bf60dbab03d6d" host="localhost"
Sep  4 17:11:58.785649 containerd[1429]: 2024-09-04 17:11:58.738 [INFO][3860] ipam.go 372: Looking up existing affinities for host host="localhost"
Sep  4 17:11:58.785649 containerd[1429]: 2024-09-04 17:11:58.743 [INFO][3860] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Sep  4 17:11:58.785649 containerd[1429]: 2024-09-04 17:11:58.745 [INFO][3860] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep  4 17:11:58.785649 containerd[1429]: 2024-09-04 17:11:58.747 [INFO][3860] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep  4 17:11:58.785649 containerd[1429]: 2024-09-04 17:11:58.748 [INFO][3860] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.34d50c5a8f4bf76419fd782e0fdb29bc0aac9046ca64982bb54bf60dbab03d6d" host="localhost"
Sep  4 17:11:58.785649 containerd[1429]: 2024-09-04 17:11:58.749 [INFO][3860] ipam.go 1685: Creating new handle: k8s-pod-network.34d50c5a8f4bf76419fd782e0fdb29bc0aac9046ca64982bb54bf60dbab03d6d
Sep  4 17:11:58.785649 containerd[1429]: 2024-09-04 17:11:58.754 [INFO][3860] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.34d50c5a8f4bf76419fd782e0fdb29bc0aac9046ca64982bb54bf60dbab03d6d" host="localhost"
Sep  4 17:11:58.785649 containerd[1429]: 2024-09-04 17:11:58.758 [INFO][3860] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.34d50c5a8f4bf76419fd782e0fdb29bc0aac9046ca64982bb54bf60dbab03d6d" host="localhost"
Sep  4 17:11:58.785649 containerd[1429]: 2024-09-04 17:11:58.758 [INFO][3860] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.34d50c5a8f4bf76419fd782e0fdb29bc0aac9046ca64982bb54bf60dbab03d6d" host="localhost"
Sep  4 17:11:58.785649 containerd[1429]: 2024-09-04 17:11:58.758 [INFO][3860] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:11:58.785649 containerd[1429]: 2024-09-04 17:11:58.758 [INFO][3860] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="34d50c5a8f4bf76419fd782e0fdb29bc0aac9046ca64982bb54bf60dbab03d6d" HandleID="k8s-pod-network.34d50c5a8f4bf76419fd782e0fdb29bc0aac9046ca64982bb54bf60dbab03d6d" Workload="localhost-k8s-coredns--5dd5756b68--c5ss6-eth0"
Sep  4 17:11:58.786165 containerd[1429]: 2024-09-04 17:11:58.760 [INFO][3834] k8s.go 386: Populated endpoint ContainerID="34d50c5a8f4bf76419fd782e0fdb29bc0aac9046ca64982bb54bf60dbab03d6d" Namespace="kube-system" Pod="coredns-5dd5756b68-c5ss6" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--c5ss6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--c5ss6-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"7bb12c91-2761-4677-914c-40c5d21a7ccb", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-c5ss6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid40c5f4a50c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:11:58.786165 containerd[1429]: 2024-09-04 17:11:58.761 [INFO][3834] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="34d50c5a8f4bf76419fd782e0fdb29bc0aac9046ca64982bb54bf60dbab03d6d" Namespace="kube-system" Pod="coredns-5dd5756b68-c5ss6" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--c5ss6-eth0"
Sep  4 17:11:58.786165 containerd[1429]: 2024-09-04 17:11:58.761 [INFO][3834] dataplane_linux.go 68: Setting the host side veth name to calid40c5f4a50c ContainerID="34d50c5a8f4bf76419fd782e0fdb29bc0aac9046ca64982bb54bf60dbab03d6d" Namespace="kube-system" Pod="coredns-5dd5756b68-c5ss6" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--c5ss6-eth0"
Sep  4 17:11:58.786165 containerd[1429]: 2024-09-04 17:11:58.768 [INFO][3834] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="34d50c5a8f4bf76419fd782e0fdb29bc0aac9046ca64982bb54bf60dbab03d6d" Namespace="kube-system" Pod="coredns-5dd5756b68-c5ss6" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--c5ss6-eth0"
Sep  4 17:11:58.786165 containerd[1429]: 2024-09-04 17:11:58.769 [INFO][3834] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="34d50c5a8f4bf76419fd782e0fdb29bc0aac9046ca64982bb54bf60dbab03d6d" Namespace="kube-system" Pod="coredns-5dd5756b68-c5ss6" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--c5ss6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--c5ss6-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"7bb12c91-2761-4677-914c-40c5d21a7ccb", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"34d50c5a8f4bf76419fd782e0fdb29bc0aac9046ca64982bb54bf60dbab03d6d", Pod:"coredns-5dd5756b68-c5ss6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid40c5f4a50c", MAC:"ae:6f:07:c5:c9:35", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:11:58.786165 containerd[1429]: 2024-09-04 17:11:58.783 [INFO][3834] k8s.go 500: Wrote updated endpoint to datastore ContainerID="34d50c5a8f4bf76419fd782e0fdb29bc0aac9046ca64982bb54bf60dbab03d6d" Namespace="kube-system" Pod="coredns-5dd5756b68-c5ss6" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--c5ss6-eth0"
Sep  4 17:11:58.807949 systemd-networkd[1373]: cali3f7da7331b9: Link UP
Sep  4 17:11:58.810830 systemd-networkd[1373]: cali3f7da7331b9: Gained carrier
Sep  4 17:11:58.825174 containerd[1429]: time="2024-09-04T17:11:58.824865065Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:11:58.825174 containerd[1429]: time="2024-09-04T17:11:58.824926465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:11:58.825174 containerd[1429]: time="2024-09-04T17:11:58.824950625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:11:58.825174 containerd[1429]: time="2024-09-04T17:11:58.824964345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:11:58.827608 containerd[1429]: 2024-09-04 17:11:58.685 [INFO][3851] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--78c75d8fb8--b7r8w-eth0 calico-kube-controllers-78c75d8fb8- calico-system  a8d222b3-d5db-4dcc-9a51-2bef0d400fc1 802 0 2024-09-04 17:11:37 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78c75d8fb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s  localhost  calico-kube-controllers-78c75d8fb8-b7r8w eth0 calico-kube-controllers [] []   [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3f7da7331b9  [] []}} ContainerID="57fa94f553340dfd6a393e01557422ac84eff0f57eb648d72c5c3905ebde2ab4" Namespace="calico-system" Pod="calico-kube-controllers-78c75d8fb8-b7r8w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78c75d8fb8--b7r8w-"
Sep  4 17:11:58.827608 containerd[1429]: 2024-09-04 17:11:58.685 [INFO][3851] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="57fa94f553340dfd6a393e01557422ac84eff0f57eb648d72c5c3905ebde2ab4" Namespace="calico-system" Pod="calico-kube-controllers-78c75d8fb8-b7r8w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78c75d8fb8--b7r8w-eth0"
Sep  4 17:11:58.827608 containerd[1429]: 2024-09-04 17:11:58.724 [INFO][3866] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57fa94f553340dfd6a393e01557422ac84eff0f57eb648d72c5c3905ebde2ab4" HandleID="k8s-pod-network.57fa94f553340dfd6a393e01557422ac84eff0f57eb648d72c5c3905ebde2ab4" Workload="localhost-k8s-calico--kube--controllers--78c75d8fb8--b7r8w-eth0"
Sep  4 17:11:58.827608 containerd[1429]: 2024-09-04 17:11:58.738 [INFO][3866] ipam_plugin.go 270: Auto assigning IP ContainerID="57fa94f553340dfd6a393e01557422ac84eff0f57eb648d72c5c3905ebde2ab4" HandleID="k8s-pod-network.57fa94f553340dfd6a393e01557422ac84eff0f57eb648d72c5c3905ebde2ab4" Workload="localhost-k8s-calico--kube--controllers--78c75d8fb8--b7r8w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000321100), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-78c75d8fb8-b7r8w", "timestamp":"2024-09-04 17:11:58.724764988 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep  4 17:11:58.827608 containerd[1429]: 2024-09-04 17:11:58.738 [INFO][3866] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:11:58.827608 containerd[1429]: 2024-09-04 17:11:58.760 [INFO][3866] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:11:58.827608 containerd[1429]: 2024-09-04 17:11:58.760 [INFO][3866] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep  4 17:11:58.827608 containerd[1429]: 2024-09-04 17:11:58.762 [INFO][3866] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.57fa94f553340dfd6a393e01557422ac84eff0f57eb648d72c5c3905ebde2ab4" host="localhost"
Sep  4 17:11:58.827608 containerd[1429]: 2024-09-04 17:11:58.770 [INFO][3866] ipam.go 372: Looking up existing affinities for host host="localhost"
Sep  4 17:11:58.827608 containerd[1429]: 2024-09-04 17:11:58.779 [INFO][3866] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Sep  4 17:11:58.827608 containerd[1429]: 2024-09-04 17:11:58.783 [INFO][3866] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep  4 17:11:58.827608 containerd[1429]: 2024-09-04 17:11:58.786 [INFO][3866] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep  4 17:11:58.827608 containerd[1429]: 2024-09-04 17:11:58.786 [INFO][3866] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.57fa94f553340dfd6a393e01557422ac84eff0f57eb648d72c5c3905ebde2ab4" host="localhost"
Sep  4 17:11:58.827608 containerd[1429]: 2024-09-04 17:11:58.787 [INFO][3866] ipam.go 1685: Creating new handle: k8s-pod-network.57fa94f553340dfd6a393e01557422ac84eff0f57eb648d72c5c3905ebde2ab4
Sep  4 17:11:58.827608 containerd[1429]: 2024-09-04 17:11:58.792 [INFO][3866] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.57fa94f553340dfd6a393e01557422ac84eff0f57eb648d72c5c3905ebde2ab4" host="localhost"
Sep  4 17:11:58.827608 containerd[1429]: 2024-09-04 17:11:58.799 [INFO][3866] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.57fa94f553340dfd6a393e01557422ac84eff0f57eb648d72c5c3905ebde2ab4" host="localhost"
Sep  4 17:11:58.827608 containerd[1429]: 2024-09-04 17:11:58.799 [INFO][3866] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.57fa94f553340dfd6a393e01557422ac84eff0f57eb648d72c5c3905ebde2ab4" host="localhost"
Sep  4 17:11:58.827608 containerd[1429]: 2024-09-04 17:11:58.799 [INFO][3866] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:11:58.827608 containerd[1429]: 2024-09-04 17:11:58.799 [INFO][3866] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="57fa94f553340dfd6a393e01557422ac84eff0f57eb648d72c5c3905ebde2ab4" HandleID="k8s-pod-network.57fa94f553340dfd6a393e01557422ac84eff0f57eb648d72c5c3905ebde2ab4" Workload="localhost-k8s-calico--kube--controllers--78c75d8fb8--b7r8w-eth0"
Sep  4 17:11:58.828129 containerd[1429]: 2024-09-04 17:11:58.803 [INFO][3851] k8s.go 386: Populated endpoint ContainerID="57fa94f553340dfd6a393e01557422ac84eff0f57eb648d72c5c3905ebde2ab4" Namespace="calico-system" Pod="calico-kube-controllers-78c75d8fb8-b7r8w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78c75d8fb8--b7r8w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78c75d8fb8--b7r8w-eth0", GenerateName:"calico-kube-controllers-78c75d8fb8-", Namespace:"calico-system", SelfLink:"", UID:"a8d222b3-d5db-4dcc-9a51-2bef0d400fc1", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78c75d8fb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-78c75d8fb8-b7r8w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3f7da7331b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:11:58.828129 containerd[1429]: 2024-09-04 17:11:58.803 [INFO][3851] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="57fa94f553340dfd6a393e01557422ac84eff0f57eb648d72c5c3905ebde2ab4" Namespace="calico-system" Pod="calico-kube-controllers-78c75d8fb8-b7r8w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78c75d8fb8--b7r8w-eth0"
Sep  4 17:11:58.828129 containerd[1429]: 2024-09-04 17:11:58.803 [INFO][3851] dataplane_linux.go 68: Setting the host side veth name to cali3f7da7331b9 ContainerID="57fa94f553340dfd6a393e01557422ac84eff0f57eb648d72c5c3905ebde2ab4" Namespace="calico-system" Pod="calico-kube-controllers-78c75d8fb8-b7r8w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78c75d8fb8--b7r8w-eth0"
Sep  4 17:11:58.828129 containerd[1429]: 2024-09-04 17:11:58.810 [INFO][3851] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="57fa94f553340dfd6a393e01557422ac84eff0f57eb648d72c5c3905ebde2ab4" Namespace="calico-system" Pod="calico-kube-controllers-78c75d8fb8-b7r8w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78c75d8fb8--b7r8w-eth0"
Sep  4 17:11:58.828129 containerd[1429]: 2024-09-04 17:11:58.811 [INFO][3851] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="57fa94f553340dfd6a393e01557422ac84eff0f57eb648d72c5c3905ebde2ab4" Namespace="calico-system" Pod="calico-kube-controllers-78c75d8fb8-b7r8w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78c75d8fb8--b7r8w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78c75d8fb8--b7r8w-eth0", GenerateName:"calico-kube-controllers-78c75d8fb8-", Namespace:"calico-system", SelfLink:"", UID:"a8d222b3-d5db-4dcc-9a51-2bef0d400fc1", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78c75d8fb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"57fa94f553340dfd6a393e01557422ac84eff0f57eb648d72c5c3905ebde2ab4", Pod:"calico-kube-controllers-78c75d8fb8-b7r8w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3f7da7331b9", MAC:"3a:94:de:fe:0e:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:11:58.828129 containerd[1429]: 2024-09-04 17:11:58.821 [INFO][3851] k8s.go 500: Wrote updated endpoint to datastore ContainerID="57fa94f553340dfd6a393e01557422ac84eff0f57eb648d72c5c3905ebde2ab4" Namespace="calico-system" Pod="calico-kube-controllers-78c75d8fb8-b7r8w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78c75d8fb8--b7r8w-eth0"
Sep  4 17:11:58.848520 systemd[1]: Started cri-containerd-34d50c5a8f4bf76419fd782e0fdb29bc0aac9046ca64982bb54bf60dbab03d6d.scope - libcontainer container 34d50c5a8f4bf76419fd782e0fdb29bc0aac9046ca64982bb54bf60dbab03d6d.
Sep  4 17:11:58.852753 containerd[1429]: time="2024-09-04T17:11:58.852643669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:11:58.852753 containerd[1429]: time="2024-09-04T17:11:58.852697389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:11:58.852753 containerd[1429]: time="2024-09-04T17:11:58.852711029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:11:58.852753 containerd[1429]: time="2024-09-04T17:11:58.852720629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:11:58.860221 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep  4 17:11:58.870747 systemd[1]: Started cri-containerd-57fa94f553340dfd6a393e01557422ac84eff0f57eb648d72c5c3905ebde2ab4.scope - libcontainer container 57fa94f553340dfd6a393e01557422ac84eff0f57eb648d72c5c3905ebde2ab4.
Sep  4 17:11:58.883571 containerd[1429]: time="2024-09-04T17:11:58.883526277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-c5ss6,Uid:7bb12c91-2761-4677-914c-40c5d21a7ccb,Namespace:kube-system,Attempt:1,} returns sandbox id \"34d50c5a8f4bf76419fd782e0fdb29bc0aac9046ca64982bb54bf60dbab03d6d\""
Sep  4 17:11:58.884847 kubelet[2482]: E0904 17:11:58.884633    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:58.886876 containerd[1429]: time="2024-09-04T17:11:58.886836722Z" level=info msg="CreateContainer within sandbox \"34d50c5a8f4bf76419fd782e0fdb29bc0aac9046ca64982bb54bf60dbab03d6d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep  4 17:11:58.887474 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep  4 17:11:58.911573 containerd[1429]: time="2024-09-04T17:11:58.911524321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78c75d8fb8-b7r8w,Uid:a8d222b3-d5db-4dcc-9a51-2bef0d400fc1,Namespace:calico-system,Attempt:1,} returns sandbox id \"57fa94f553340dfd6a393e01557422ac84eff0f57eb648d72c5c3905ebde2ab4\""
Sep  4 17:11:58.913834 containerd[1429]: time="2024-09-04T17:11:58.913795645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\""
Sep  4 17:11:58.915820 systemd[1]: Started sshd@8-10.0.0.33:22-10.0.0.1:56018.service - OpenSSH per-connection server daemon (10.0.0.1:56018).
Sep  4 17:11:58.952636 containerd[1429]: time="2024-09-04T17:11:58.952568185Z" level=info msg="CreateContainer within sandbox \"34d50c5a8f4bf76419fd782e0fdb29bc0aac9046ca64982bb54bf60dbab03d6d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fc2bf95116cf00bfe6bd054933ef2a870ba05c2375b70de11969d436e17ceb04\""
Sep  4 17:11:58.953393 containerd[1429]: time="2024-09-04T17:11:58.953361707Z" level=info msg="StartContainer for \"fc2bf95116cf00bfe6bd054933ef2a870ba05c2375b70de11969d436e17ceb04\""
Sep  4 17:11:58.968220 sshd[3987]: Accepted publickey for core from 10.0.0.1 port 56018 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep  4 17:11:58.968768 sshd[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:11:58.973422 systemd-logind[1410]: New session 9 of user core.
Sep  4 17:11:58.982528 systemd[1]: Started cri-containerd-fc2bf95116cf00bfe6bd054933ef2a870ba05c2375b70de11969d436e17ceb04.scope - libcontainer container fc2bf95116cf00bfe6bd054933ef2a870ba05c2375b70de11969d436e17ceb04.
Sep  4 17:11:58.983403 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep  4 17:11:59.013767 containerd[1429]: time="2024-09-04T17:11:59.013720281Z" level=info msg="StartContainer for \"fc2bf95116cf00bfe6bd054933ef2a870ba05c2375b70de11969d436e17ceb04\" returns successfully"
Sep  4 17:11:59.235798 sshd[3987]: pam_unix(sshd:session): session closed for user core
Sep  4 17:11:59.239301 systemd[1]: sshd@8-10.0.0.33:22-10.0.0.1:56018.service: Deactivated successfully.
Sep  4 17:11:59.241416 systemd[1]: session-9.scope: Deactivated successfully.
Sep  4 17:11:59.242147 systemd-logind[1410]: Session 9 logged out. Waiting for processes to exit.
Sep  4 17:11:59.243017 systemd-logind[1410]: Removed session 9.
Sep  4 17:11:59.368731 containerd[1429]: time="2024-09-04T17:11:59.368685827Z" level=info msg="StopPodSandbox for \"d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0\""
Sep  4 17:11:59.474833 containerd[1429]: 2024-09-04 17:11:59.436 [INFO][4060] k8s.go 608: Cleaning up netns ContainerID="d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0"
Sep  4 17:11:59.474833 containerd[1429]: 2024-09-04 17:11:59.436 [INFO][4060] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0" iface="eth0" netns="/var/run/netns/cni-e5a0f199-be30-34a8-d4c7-9db4b4c535b1"
Sep  4 17:11:59.474833 containerd[1429]: 2024-09-04 17:11:59.436 [INFO][4060] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0" iface="eth0" netns="/var/run/netns/cni-e5a0f199-be30-34a8-d4c7-9db4b4c535b1"
Sep  4 17:11:59.474833 containerd[1429]: 2024-09-04 17:11:59.437 [INFO][4060] dataplane_linux.go 568: Workload's veth was already gone.  Nothing to do. ContainerID="d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0" iface="eth0" netns="/var/run/netns/cni-e5a0f199-be30-34a8-d4c7-9db4b4c535b1"
Sep  4 17:11:59.474833 containerd[1429]: 2024-09-04 17:11:59.437 [INFO][4060] k8s.go 615: Releasing IP address(es) ContainerID="d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0"
Sep  4 17:11:59.474833 containerd[1429]: 2024-09-04 17:11:59.437 [INFO][4060] utils.go 188: Calico CNI releasing IP address ContainerID="d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0"
Sep  4 17:11:59.474833 containerd[1429]: 2024-09-04 17:11:59.461 [INFO][4067] ipam_plugin.go 417: Releasing address using handleID ContainerID="d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0" HandleID="k8s-pod-network.d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0" Workload="localhost-k8s-csi--node--driver--chwdn-eth0"
Sep  4 17:11:59.474833 containerd[1429]: 2024-09-04 17:11:59.461 [INFO][4067] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:11:59.474833 containerd[1429]: 2024-09-04 17:11:59.461 [INFO][4067] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:11:59.474833 containerd[1429]: 2024-09-04 17:11:59.469 [WARNING][4067] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0" HandleID="k8s-pod-network.d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0" Workload="localhost-k8s-csi--node--driver--chwdn-eth0"
Sep  4 17:11:59.474833 containerd[1429]: 2024-09-04 17:11:59.469 [INFO][4067] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0" HandleID="k8s-pod-network.d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0" Workload="localhost-k8s-csi--node--driver--chwdn-eth0"
Sep  4 17:11:59.474833 containerd[1429]: 2024-09-04 17:11:59.471 [INFO][4067] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:11:59.474833 containerd[1429]: 2024-09-04 17:11:59.472 [INFO][4060] k8s.go 621: Teardown processing complete. ContainerID="d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0"
Sep  4 17:11:59.475646 containerd[1429]: time="2024-09-04T17:11:59.474983070Z" level=info msg="TearDown network for sandbox \"d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0\" successfully"
Sep  4 17:11:59.475646 containerd[1429]: time="2024-09-04T17:11:59.475010830Z" level=info msg="StopPodSandbox for \"d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0\" returns successfully"
Sep  4 17:11:59.475782 containerd[1429]: time="2024-09-04T17:11:59.475702591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-chwdn,Uid:ad9729de-5f0a-425d-b5ea-b886ce65bfc9,Namespace:calico-system,Attempt:1,}"
Sep  4 17:11:59.542535 kubelet[2482]: E0904 17:11:59.542411    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:11:59.553400 kubelet[2482]: I0904 17:11:59.553361    2482 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-c5ss6" podStartSLOduration=28.553304191 podCreationTimestamp="2024-09-04 17:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:11:59.551986749 +0000 UTC m=+43.296245138" watchObservedRunningTime="2024-09-04 17:11:59.553304191 +0000 UTC m=+43.297562580"
Sep  4 17:11:59.599497 systemd-networkd[1373]: cali8161346a695: Link UP
Sep  4 17:11:59.600064 systemd-networkd[1373]: cali8161346a695: Gained carrier
Sep  4 17:11:59.605993 systemd[1]: run-netns-cni\x2de5a0f199\x2dbe30\x2d34a8\x2dd4c7\x2d9db4b4c535b1.mount: Deactivated successfully.
Sep  4 17:11:59.623526 containerd[1429]: 2024-09-04 17:11:59.518 [INFO][4076] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--chwdn-eth0 csi-node-driver- calico-system  ad9729de-5f0a-425d-b5ea-b886ce65bfc9 818 0 2024-09-04 17:11:37 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s  localhost  csi-node-driver-chwdn eth0 default [] []   [kns.calico-system ksa.calico-system.default] cali8161346a695  [] []}} ContainerID="2b7403511ee9ff68d1bba48f6af0f524c1d4540149227d10fbf8d5e8a3b747b5" Namespace="calico-system" Pod="csi-node-driver-chwdn" WorkloadEndpoint="localhost-k8s-csi--node--driver--chwdn-"
Sep  4 17:11:59.623526 containerd[1429]: 2024-09-04 17:11:59.518 [INFO][4076] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2b7403511ee9ff68d1bba48f6af0f524c1d4540149227d10fbf8d5e8a3b747b5" Namespace="calico-system" Pod="csi-node-driver-chwdn" WorkloadEndpoint="localhost-k8s-csi--node--driver--chwdn-eth0"
Sep  4 17:11:59.623526 containerd[1429]: 2024-09-04 17:11:59.547 [INFO][4089] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b7403511ee9ff68d1bba48f6af0f524c1d4540149227d10fbf8d5e8a3b747b5" HandleID="k8s-pod-network.2b7403511ee9ff68d1bba48f6af0f524c1d4540149227d10fbf8d5e8a3b747b5" Workload="localhost-k8s-csi--node--driver--chwdn-eth0"
Sep  4 17:11:59.623526 containerd[1429]: 2024-09-04 17:11:59.563 [INFO][4089] ipam_plugin.go 270: Auto assigning IP ContainerID="2b7403511ee9ff68d1bba48f6af0f524c1d4540149227d10fbf8d5e8a3b747b5" HandleID="k8s-pod-network.2b7403511ee9ff68d1bba48f6af0f524c1d4540149227d10fbf8d5e8a3b747b5" Workload="localhost-k8s-csi--node--driver--chwdn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f2d50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-chwdn", "timestamp":"2024-09-04 17:11:59.547020381 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep  4 17:11:59.623526 containerd[1429]: 2024-09-04 17:11:59.563 [INFO][4089] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:11:59.623526 containerd[1429]: 2024-09-04 17:11:59.563 [INFO][4089] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:11:59.623526 containerd[1429]: 2024-09-04 17:11:59.563 [INFO][4089] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep  4 17:11:59.623526 containerd[1429]: 2024-09-04 17:11:59.565 [INFO][4089] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2b7403511ee9ff68d1bba48f6af0f524c1d4540149227d10fbf8d5e8a3b747b5" host="localhost"
Sep  4 17:11:59.623526 containerd[1429]: 2024-09-04 17:11:59.573 [INFO][4089] ipam.go 372: Looking up existing affinities for host host="localhost"
Sep  4 17:11:59.623526 containerd[1429]: 2024-09-04 17:11:59.577 [INFO][4089] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Sep  4 17:11:59.623526 containerd[1429]: 2024-09-04 17:11:59.579 [INFO][4089] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep  4 17:11:59.623526 containerd[1429]: 2024-09-04 17:11:59.582 [INFO][4089] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep  4 17:11:59.623526 containerd[1429]: 2024-09-04 17:11:59.582 [INFO][4089] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2b7403511ee9ff68d1bba48f6af0f524c1d4540149227d10fbf8d5e8a3b747b5" host="localhost"
Sep  4 17:11:59.623526 containerd[1429]: 2024-09-04 17:11:59.584 [INFO][4089] ipam.go 1685: Creating new handle: k8s-pod-network.2b7403511ee9ff68d1bba48f6af0f524c1d4540149227d10fbf8d5e8a3b747b5
Sep  4 17:11:59.623526 containerd[1429]: 2024-09-04 17:11:59.587 [INFO][4089] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2b7403511ee9ff68d1bba48f6af0f524c1d4540149227d10fbf8d5e8a3b747b5" host="localhost"
Sep  4 17:11:59.623526 containerd[1429]: 2024-09-04 17:11:59.592 [INFO][4089] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.2b7403511ee9ff68d1bba48f6af0f524c1d4540149227d10fbf8d5e8a3b747b5" host="localhost"
Sep  4 17:11:59.623526 containerd[1429]: 2024-09-04 17:11:59.592 [INFO][4089] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2b7403511ee9ff68d1bba48f6af0f524c1d4540149227d10fbf8d5e8a3b747b5" host="localhost"
Sep  4 17:11:59.623526 containerd[1429]: 2024-09-04 17:11:59.592 [INFO][4089] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:11:59.623526 containerd[1429]: 2024-09-04 17:11:59.592 [INFO][4089] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2b7403511ee9ff68d1bba48f6af0f524c1d4540149227d10fbf8d5e8a3b747b5" HandleID="k8s-pod-network.2b7403511ee9ff68d1bba48f6af0f524c1d4540149227d10fbf8d5e8a3b747b5" Workload="localhost-k8s-csi--node--driver--chwdn-eth0"
Sep  4 17:11:59.625933 containerd[1429]: 2024-09-04 17:11:59.596 [INFO][4076] k8s.go 386: Populated endpoint ContainerID="2b7403511ee9ff68d1bba48f6af0f524c1d4540149227d10fbf8d5e8a3b747b5" Namespace="calico-system" Pod="csi-node-driver-chwdn" WorkloadEndpoint="localhost-k8s-csi--node--driver--chwdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--chwdn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ad9729de-5f0a-425d-b5ea-b886ce65bfc9", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-chwdn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8161346a695", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:11:59.625933 containerd[1429]: 2024-09-04 17:11:59.596 [INFO][4076] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2b7403511ee9ff68d1bba48f6af0f524c1d4540149227d10fbf8d5e8a3b747b5" Namespace="calico-system" Pod="csi-node-driver-chwdn" WorkloadEndpoint="localhost-k8s-csi--node--driver--chwdn-eth0"
Sep  4 17:11:59.625933 containerd[1429]: 2024-09-04 17:11:59.596 [INFO][4076] dataplane_linux.go 68: Setting the host side veth name to cali8161346a695 ContainerID="2b7403511ee9ff68d1bba48f6af0f524c1d4540149227d10fbf8d5e8a3b747b5" Namespace="calico-system" Pod="csi-node-driver-chwdn" WorkloadEndpoint="localhost-k8s-csi--node--driver--chwdn-eth0"
Sep  4 17:11:59.625933 containerd[1429]: 2024-09-04 17:11:59.599 [INFO][4076] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2b7403511ee9ff68d1bba48f6af0f524c1d4540149227d10fbf8d5e8a3b747b5" Namespace="calico-system" Pod="csi-node-driver-chwdn" WorkloadEndpoint="localhost-k8s-csi--node--driver--chwdn-eth0"
Sep  4 17:11:59.625933 containerd[1429]: 2024-09-04 17:11:59.599 [INFO][4076] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2b7403511ee9ff68d1bba48f6af0f524c1d4540149227d10fbf8d5e8a3b747b5" Namespace="calico-system" Pod="csi-node-driver-chwdn" WorkloadEndpoint="localhost-k8s-csi--node--driver--chwdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--chwdn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ad9729de-5f0a-425d-b5ea-b886ce65bfc9", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2b7403511ee9ff68d1bba48f6af0f524c1d4540149227d10fbf8d5e8a3b747b5", Pod:"csi-node-driver-chwdn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8161346a695", MAC:"8e:e3:62:ad:a6:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:11:59.625933 containerd[1429]: 2024-09-04 17:11:59.612 [INFO][4076] k8s.go 500: Wrote updated endpoint to datastore ContainerID="2b7403511ee9ff68d1bba48f6af0f524c1d4540149227d10fbf8d5e8a3b747b5" Namespace="calico-system" Pod="csi-node-driver-chwdn" WorkloadEndpoint="localhost-k8s-csi--node--driver--chwdn-eth0"
Sep  4 17:11:59.722175 containerd[1429]: time="2024-09-04T17:11:59.722066490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:11:59.722175 containerd[1429]: time="2024-09-04T17:11:59.722130450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:11:59.722175 containerd[1429]: time="2024-09-04T17:11:59.722149450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:11:59.722175 containerd[1429]: time="2024-09-04T17:11:59.722162650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:11:59.756627 systemd[1]: Started cri-containerd-2b7403511ee9ff68d1bba48f6af0f524c1d4540149227d10fbf8d5e8a3b747b5.scope - libcontainer container 2b7403511ee9ff68d1bba48f6af0f524c1d4540149227d10fbf8d5e8a3b747b5.
Sep  4 17:11:59.768133 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep  4 17:11:59.783331 containerd[1429]: time="2024-09-04T17:11:59.780779621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-chwdn,Uid:ad9729de-5f0a-425d-b5ea-b886ce65bfc9,Namespace:calico-system,Attempt:1,} returns sandbox id \"2b7403511ee9ff68d1bba48f6af0f524c1d4540149227d10fbf8d5e8a3b747b5\""
Sep  4 17:12:00.024645 systemd-networkd[1373]: calid40c5f4a50c: Gained IPv6LL
Sep  4 17:12:00.472412 containerd[1429]: time="2024-09-04T17:12:00.471577589Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:12:00.472412 containerd[1429]: time="2024-09-04T17:12:00.472299150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=31361753"
Sep  4 17:12:00.472990 containerd[1429]: time="2024-09-04T17:12:00.472946671Z" level=info msg="ImageCreate event name:\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:12:00.475516 containerd[1429]: time="2024-09-04T17:12:00.475480075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:12:00.476974 containerd[1429]: time="2024-09-04T17:12:00.476930877Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"32729240\" in 1.563092232s"
Sep  4 17:12:00.477017 containerd[1429]: time="2024-09-04T17:12:00.476986797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\""
Sep  4 17:12:00.477744 containerd[1429]: time="2024-09-04T17:12:00.477560078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\""
Sep  4 17:12:00.485868 containerd[1429]: time="2024-09-04T17:12:00.485820531Z" level=info msg="CreateContainer within sandbox \"57fa94f553340dfd6a393e01557422ac84eff0f57eb648d72c5c3905ebde2ab4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep  4 17:12:00.506449 containerd[1429]: time="2024-09-04T17:12:00.506305162Z" level=info msg="CreateContainer within sandbox \"57fa94f553340dfd6a393e01557422ac84eff0f57eb648d72c5c3905ebde2ab4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d952cadae373004420a3db3dc67dfb892dccacaf3f829e9b7e8af51e80d17fa2\""
Sep  4 17:12:00.507065 containerd[1429]: time="2024-09-04T17:12:00.506951722Z" level=info msg="StartContainer for \"d952cadae373004420a3db3dc67dfb892dccacaf3f829e9b7e8af51e80d17fa2\""
Sep  4 17:12:00.546479 systemd[1]: Started cri-containerd-d952cadae373004420a3db3dc67dfb892dccacaf3f829e9b7e8af51e80d17fa2.scope - libcontainer container d952cadae373004420a3db3dc67dfb892dccacaf3f829e9b7e8af51e80d17fa2.
Sep  4 17:12:00.548426 kubelet[2482]: E0904 17:12:00.548403    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:12:00.587936 containerd[1429]: time="2024-09-04T17:12:00.587888725Z" level=info msg="StartContainer for \"d952cadae373004420a3db3dc67dfb892dccacaf3f829e9b7e8af51e80d17fa2\" returns successfully"
Sep  4 17:12:00.857517 systemd-networkd[1373]: cali3f7da7331b9: Gained IPv6LL
Sep  4 17:12:01.304477 systemd-networkd[1373]: cali8161346a695: Gained IPv6LL
Sep  4 17:12:01.552197 kubelet[2482]: E0904 17:12:01.552154    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:12:01.588066 kubelet[2482]: I0904 17:12:01.587961    2482 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-78c75d8fb8-b7r8w" podStartSLOduration=23.021032819 podCreationTimestamp="2024-09-04 17:11:37 +0000 UTC" firstStartedPulling="2024-09-04 17:11:58.912689243 +0000 UTC m=+42.656947632" lastFinishedPulling="2024-09-04 17:12:00.477339118 +0000 UTC m=+44.221597507" observedRunningTime="2024-09-04 17:12:01.584477372 +0000 UTC m=+45.328735761" watchObservedRunningTime="2024-09-04 17:12:01.585682694 +0000 UTC m=+45.329941083"
Sep  4 17:12:01.714290 containerd[1429]: time="2024-09-04T17:12:01.713542563Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:12:01.717191 containerd[1429]: time="2024-09-04T17:12:01.717157368Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7211060"
Sep  4 17:12:01.720474 containerd[1429]: time="2024-09-04T17:12:01.720431693Z" level=info msg="ImageCreate event name:\"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:12:01.727562 containerd[1429]: time="2024-09-04T17:12:01.727517504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:12:01.728100 containerd[1429]: time="2024-09-04T17:12:01.728060945Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"8578579\" in 1.250465267s"
Sep  4 17:12:01.728100 containerd[1429]: time="2024-09-04T17:12:01.728098425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\""
Sep  4 17:12:01.730755 containerd[1429]: time="2024-09-04T17:12:01.730706468Z" level=info msg="CreateContainer within sandbox \"2b7403511ee9ff68d1bba48f6af0f524c1d4540149227d10fbf8d5e8a3b747b5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep  4 17:12:01.818115 containerd[1429]: time="2024-09-04T17:12:01.818066158Z" level=info msg="CreateContainer within sandbox \"2b7403511ee9ff68d1bba48f6af0f524c1d4540149227d10fbf8d5e8a3b747b5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"bed7fcaa66845fbf4d951ed49212b3cb0ec94d41a026e155e1bc5274c75d867d\""
Sep  4 17:12:01.818996 containerd[1429]: time="2024-09-04T17:12:01.818828839Z" level=info msg="StartContainer for \"bed7fcaa66845fbf4d951ed49212b3cb0ec94d41a026e155e1bc5274c75d867d\""
Sep  4 17:12:01.859525 systemd[1]: Started cri-containerd-bed7fcaa66845fbf4d951ed49212b3cb0ec94d41a026e155e1bc5274c75d867d.scope - libcontainer container bed7fcaa66845fbf4d951ed49212b3cb0ec94d41a026e155e1bc5274c75d867d.
Sep  4 17:12:02.101713 containerd[1429]: time="2024-09-04T17:12:02.101631295Z" level=info msg="StartContainer for \"bed7fcaa66845fbf4d951ed49212b3cb0ec94d41a026e155e1bc5274c75d867d\" returns successfully"
Sep  4 17:12:02.103006 containerd[1429]: time="2024-09-04T17:12:02.102920577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\""
Sep  4 17:12:02.368987 containerd[1429]: time="2024-09-04T17:12:02.368468644Z" level=info msg="StopPodSandbox for \"9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598\""
Sep  4 17:12:02.481481 containerd[1429]: 2024-09-04 17:12:02.443 [INFO][4282] k8s.go 608: Cleaning up netns ContainerID="9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598"
Sep  4 17:12:02.481481 containerd[1429]: 2024-09-04 17:12:02.443 [INFO][4282] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598" iface="eth0" netns="/var/run/netns/cni-26f74a06-87d3-1b21-e990-d93fa7af7b33"
Sep  4 17:12:02.481481 containerd[1429]: 2024-09-04 17:12:02.443 [INFO][4282] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598" iface="eth0" netns="/var/run/netns/cni-26f74a06-87d3-1b21-e990-d93fa7af7b33"
Sep  4 17:12:02.481481 containerd[1429]: 2024-09-04 17:12:02.443 [INFO][4282] dataplane_linux.go 568: Workload's veth was already gone.  Nothing to do. ContainerID="9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598" iface="eth0" netns="/var/run/netns/cni-26f74a06-87d3-1b21-e990-d93fa7af7b33"
Sep  4 17:12:02.481481 containerd[1429]: 2024-09-04 17:12:02.444 [INFO][4282] k8s.go 615: Releasing IP address(es) ContainerID="9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598"
Sep  4 17:12:02.481481 containerd[1429]: 2024-09-04 17:12:02.444 [INFO][4282] utils.go 188: Calico CNI releasing IP address ContainerID="9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598"
Sep  4 17:12:02.481481 containerd[1429]: 2024-09-04 17:12:02.466 [INFO][4289] ipam_plugin.go 417: Releasing address using handleID ContainerID="9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598" HandleID="k8s-pod-network.9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598" Workload="localhost-k8s-coredns--5dd5756b68--kx782-eth0"
Sep  4 17:12:02.481481 containerd[1429]: 2024-09-04 17:12:02.466 [INFO][4289] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:12:02.481481 containerd[1429]: 2024-09-04 17:12:02.467 [INFO][4289] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:12:02.481481 containerd[1429]: 2024-09-04 17:12:02.476 [WARNING][4289] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598" HandleID="k8s-pod-network.9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598" Workload="localhost-k8s-coredns--5dd5756b68--kx782-eth0"
Sep  4 17:12:02.481481 containerd[1429]: 2024-09-04 17:12:02.476 [INFO][4289] ipam_plugin.go 445: Releasing address using workloadID ContainerID="9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598" HandleID="k8s-pod-network.9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598" Workload="localhost-k8s-coredns--5dd5756b68--kx782-eth0"
Sep  4 17:12:02.481481 containerd[1429]: 2024-09-04 17:12:02.478 [INFO][4289] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:12:02.481481 containerd[1429]: 2024-09-04 17:12:02.479 [INFO][4282] k8s.go 621: Teardown processing complete. ContainerID="9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598"
Sep  4 17:12:02.482142 containerd[1429]: time="2024-09-04T17:12:02.481886969Z" level=info msg="TearDown network for sandbox \"9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598\" successfully"
Sep  4 17:12:02.482142 containerd[1429]: time="2024-09-04T17:12:02.481939889Z" level=info msg="StopPodSandbox for \"9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598\" returns successfully"
Sep  4 17:12:02.484227 kubelet[2482]: E0904 17:12:02.484189    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:12:02.485425 containerd[1429]: time="2024-09-04T17:12:02.484774013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-kx782,Uid:952dfd7d-4c90-4dae-9fa9-05a48a9c20ce,Namespace:kube-system,Attempt:1,}"
Sep  4 17:12:02.637079 systemd-networkd[1373]: cali84ff708ae6f: Link UP
Sep  4 17:12:02.637242 systemd-networkd[1373]: cali84ff708ae6f: Gained carrier
Sep  4 17:12:02.652440 containerd[1429]: 2024-09-04 17:12:02.541 [INFO][4296] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--kx782-eth0 coredns-5dd5756b68- kube-system  952dfd7d-4c90-4dae-9fa9-05a48a9c20ce 873 0 2024-09-04 17:11:31 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s  localhost  coredns-5dd5756b68-kx782 eth0 coredns [] []   [kns.kube-system ksa.kube-system.coredns] cali84ff708ae6f  [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6a57e999630e5b168a58e72e06e4d4c87d30383d816c8ec5812764f19889fcaf" Namespace="kube-system" Pod="coredns-5dd5756b68-kx782" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--kx782-"
Sep  4 17:12:02.652440 containerd[1429]: 2024-09-04 17:12:02.542 [INFO][4296] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6a57e999630e5b168a58e72e06e4d4c87d30383d816c8ec5812764f19889fcaf" Namespace="kube-system" Pod="coredns-5dd5756b68-kx782" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--kx782-eth0"
Sep  4 17:12:02.652440 containerd[1429]: 2024-09-04 17:12:02.582 [INFO][4317] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a57e999630e5b168a58e72e06e4d4c87d30383d816c8ec5812764f19889fcaf" HandleID="k8s-pod-network.6a57e999630e5b168a58e72e06e4d4c87d30383d816c8ec5812764f19889fcaf" Workload="localhost-k8s-coredns--5dd5756b68--kx782-eth0"
Sep  4 17:12:02.652440 containerd[1429]: 2024-09-04 17:12:02.597 [INFO][4317] ipam_plugin.go 270: Auto assigning IP ContainerID="6a57e999630e5b168a58e72e06e4d4c87d30383d816c8ec5812764f19889fcaf" HandleID="k8s-pod-network.6a57e999630e5b168a58e72e06e4d4c87d30383d816c8ec5812764f19889fcaf" Workload="localhost-k8s-coredns--5dd5756b68--kx782-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003442e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-kx782", "timestamp":"2024-09-04 17:12:02.582462515 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep  4 17:12:02.652440 containerd[1429]: 2024-09-04 17:12:02.597 [INFO][4317] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:12:02.652440 containerd[1429]: 2024-09-04 17:12:02.597 [INFO][4317] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:12:02.652440 containerd[1429]: 2024-09-04 17:12:02.597 [INFO][4317] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep  4 17:12:02.652440 containerd[1429]: 2024-09-04 17:12:02.602 [INFO][4317] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6a57e999630e5b168a58e72e06e4d4c87d30383d816c8ec5812764f19889fcaf" host="localhost"
Sep  4 17:12:02.652440 containerd[1429]: 2024-09-04 17:12:02.607 [INFO][4317] ipam.go 372: Looking up existing affinities for host host="localhost"
Sep  4 17:12:02.652440 containerd[1429]: 2024-09-04 17:12:02.611 [INFO][4317] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Sep  4 17:12:02.652440 containerd[1429]: 2024-09-04 17:12:02.613 [INFO][4317] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep  4 17:12:02.652440 containerd[1429]: 2024-09-04 17:12:02.616 [INFO][4317] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep  4 17:12:02.652440 containerd[1429]: 2024-09-04 17:12:02.616 [INFO][4317] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6a57e999630e5b168a58e72e06e4d4c87d30383d816c8ec5812764f19889fcaf" host="localhost"
Sep  4 17:12:02.652440 containerd[1429]: 2024-09-04 17:12:02.618 [INFO][4317] ipam.go 1685: Creating new handle: k8s-pod-network.6a57e999630e5b168a58e72e06e4d4c87d30383d816c8ec5812764f19889fcaf
Sep  4 17:12:02.652440 containerd[1429]: 2024-09-04 17:12:02.622 [INFO][4317] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6a57e999630e5b168a58e72e06e4d4c87d30383d816c8ec5812764f19889fcaf" host="localhost"
Sep  4 17:12:02.652440 containerd[1429]: 2024-09-04 17:12:02.630 [INFO][4317] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.6a57e999630e5b168a58e72e06e4d4c87d30383d816c8ec5812764f19889fcaf" host="localhost"
Sep  4 17:12:02.652440 containerd[1429]: 2024-09-04 17:12:02.630 [INFO][4317] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.6a57e999630e5b168a58e72e06e4d4c87d30383d816c8ec5812764f19889fcaf" host="localhost"
Sep  4 17:12:02.652440 containerd[1429]: 2024-09-04 17:12:02.630 [INFO][4317] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:12:02.652440 containerd[1429]: 2024-09-04 17:12:02.630 [INFO][4317] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="6a57e999630e5b168a58e72e06e4d4c87d30383d816c8ec5812764f19889fcaf" HandleID="k8s-pod-network.6a57e999630e5b168a58e72e06e4d4c87d30383d816c8ec5812764f19889fcaf" Workload="localhost-k8s-coredns--5dd5756b68--kx782-eth0"
Sep  4 17:12:02.653595 containerd[1429]: 2024-09-04 17:12:02.632 [INFO][4296] k8s.go 386: Populated endpoint ContainerID="6a57e999630e5b168a58e72e06e4d4c87d30383d816c8ec5812764f19889fcaf" Namespace="kube-system" Pod="coredns-5dd5756b68-kx782" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--kx782-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--kx782-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"952dfd7d-4c90-4dae-9fa9-05a48a9c20ce", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-kx782", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84ff708ae6f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:12:02.653595 containerd[1429]: 2024-09-04 17:12:02.633 [INFO][4296] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="6a57e999630e5b168a58e72e06e4d4c87d30383d816c8ec5812764f19889fcaf" Namespace="kube-system" Pod="coredns-5dd5756b68-kx782" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--kx782-eth0"
Sep  4 17:12:02.653595 containerd[1429]: 2024-09-04 17:12:02.633 [INFO][4296] dataplane_linux.go 68: Setting the host side veth name to cali84ff708ae6f ContainerID="6a57e999630e5b168a58e72e06e4d4c87d30383d816c8ec5812764f19889fcaf" Namespace="kube-system" Pod="coredns-5dd5756b68-kx782" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--kx782-eth0"
Sep  4 17:12:02.653595 containerd[1429]: 2024-09-04 17:12:02.636 [INFO][4296] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="6a57e999630e5b168a58e72e06e4d4c87d30383d816c8ec5812764f19889fcaf" Namespace="kube-system" Pod="coredns-5dd5756b68-kx782" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--kx782-eth0"
Sep  4 17:12:02.653595 containerd[1429]: 2024-09-04 17:12:02.638 [INFO][4296] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6a57e999630e5b168a58e72e06e4d4c87d30383d816c8ec5812764f19889fcaf" Namespace="kube-system" Pod="coredns-5dd5756b68-kx782" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--kx782-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--kx782-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"952dfd7d-4c90-4dae-9fa9-05a48a9c20ce", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a57e999630e5b168a58e72e06e4d4c87d30383d816c8ec5812764f19889fcaf", Pod:"coredns-5dd5756b68-kx782", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84ff708ae6f", MAC:"da:b8:64:4d:c4:d4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:12:02.653595 containerd[1429]: 2024-09-04 17:12:02.646 [INFO][4296] k8s.go 500: Wrote updated endpoint to datastore ContainerID="6a57e999630e5b168a58e72e06e4d4c87d30383d816c8ec5812764f19889fcaf" Namespace="kube-system" Pod="coredns-5dd5756b68-kx782" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--kx782-eth0"
Sep  4 17:12:02.686151 containerd[1429]: time="2024-09-04T17:12:02.685428825Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:12:02.686151 containerd[1429]: time="2024-09-04T17:12:02.685826425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:12:02.686151 containerd[1429]: time="2024-09-04T17:12:02.685851305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:12:02.686151 containerd[1429]: time="2024-09-04T17:12:02.685861425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:12:02.711523 systemd[1]: Started cri-containerd-6a57e999630e5b168a58e72e06e4d4c87d30383d816c8ec5812764f19889fcaf.scope - libcontainer container 6a57e999630e5b168a58e72e06e4d4c87d30383d816c8ec5812764f19889fcaf.
Sep  4 17:12:02.726397 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep  4 17:12:02.746363 containerd[1429]: time="2024-09-04T17:12:02.746303753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-kx782,Uid:952dfd7d-4c90-4dae-9fa9-05a48a9c20ce,Namespace:kube-system,Attempt:1,} returns sandbox id \"6a57e999630e5b168a58e72e06e4d4c87d30383d816c8ec5812764f19889fcaf\""
Sep  4 17:12:02.749937 kubelet[2482]: E0904 17:12:02.749484    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:12:02.758196 containerd[1429]: time="2024-09-04T17:12:02.757373130Z" level=info msg="CreateContainer within sandbox \"6a57e999630e5b168a58e72e06e4d4c87d30383d816c8ec5812764f19889fcaf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep  4 17:12:02.781541 systemd[1]: run-netns-cni\x2d26f74a06\x2d87d3\x2d1b21\x2de990\x2dd93fa7af7b33.mount: Deactivated successfully.
Sep  4 17:12:02.784860 containerd[1429]: time="2024-09-04T17:12:02.784804649Z" level=info msg="CreateContainer within sandbox \"6a57e999630e5b168a58e72e06e4d4c87d30383d816c8ec5812764f19889fcaf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"82a6462baf87e61f5bcee6a23dd424fadd934ae6cccf06155e1cbe0080b24ccd\""
Sep  4 17:12:02.785455 containerd[1429]: time="2024-09-04T17:12:02.785420930Z" level=info msg="StartContainer for \"82a6462baf87e61f5bcee6a23dd424fadd934ae6cccf06155e1cbe0080b24ccd\""
Sep  4 17:12:02.785985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2180068210.mount: Deactivated successfully.
Sep  4 17:12:02.824560 systemd[1]: Started cri-containerd-82a6462baf87e61f5bcee6a23dd424fadd934ae6cccf06155e1cbe0080b24ccd.scope - libcontainer container 82a6462baf87e61f5bcee6a23dd424fadd934ae6cccf06155e1cbe0080b24ccd.
Sep  4 17:12:02.853509 containerd[1429]: time="2024-09-04T17:12:02.852868109Z" level=info msg="StartContainer for \"82a6462baf87e61f5bcee6a23dd424fadd934ae6cccf06155e1cbe0080b24ccd\" returns successfully"
Sep  4 17:12:03.485539 containerd[1429]: time="2024-09-04T17:12:03.485467098Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:12:03.486408 containerd[1429]: time="2024-09-04T17:12:03.486374979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12116870"
Sep  4 17:12:03.490843 containerd[1429]: time="2024-09-04T17:12:03.490802825Z" level=info msg="ImageCreate event name:\"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:12:03.493163 containerd[1429]: time="2024-09-04T17:12:03.493124468Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:12:03.493774 containerd[1429]: time="2024-09-04T17:12:03.493733909Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"13484341\" in 1.390768052s"
Sep  4 17:12:03.493819 containerd[1429]: time="2024-09-04T17:12:03.493774829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\""
Sep  4 17:12:03.495513 containerd[1429]: time="2024-09-04T17:12:03.495447592Z" level=info msg="CreateContainer within sandbox \"2b7403511ee9ff68d1bba48f6af0f524c1d4540149227d10fbf8d5e8a3b747b5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep  4 17:12:03.507115 containerd[1429]: time="2024-09-04T17:12:03.507065488Z" level=info msg="CreateContainer within sandbox \"2b7403511ee9ff68d1bba48f6af0f524c1d4540149227d10fbf8d5e8a3b747b5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"fc5f30c6a2a7c63830d235f237e19b64d3ed51b4284ef4dec66dac298865ef61\""
Sep  4 17:12:03.507590 containerd[1429]: time="2024-09-04T17:12:03.507559969Z" level=info msg="StartContainer for \"fc5f30c6a2a7c63830d235f237e19b64d3ed51b4284ef4dec66dac298865ef61\""
Sep  4 17:12:03.536512 systemd[1]: Started cri-containerd-fc5f30c6a2a7c63830d235f237e19b64d3ed51b4284ef4dec66dac298865ef61.scope - libcontainer container fc5f30c6a2a7c63830d235f237e19b64d3ed51b4284ef4dec66dac298865ef61.
Sep  4 17:12:03.561275 kubelet[2482]: E0904 17:12:03.561243    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:12:03.569756 containerd[1429]: time="2024-09-04T17:12:03.569700018Z" level=info msg="StartContainer for \"fc5f30c6a2a7c63830d235f237e19b64d3ed51b4284ef4dec66dac298865ef61\" returns successfully"
Sep  4 17:12:03.576999 kubelet[2482]: I0904 17:12:03.576690    2482 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-kx782" podStartSLOduration=32.576641268 podCreationTimestamp="2024-09-04 17:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:12:03.574917586 +0000 UTC m=+47.319175975" watchObservedRunningTime="2024-09-04 17:12:03.576641268 +0000 UTC m=+47.320899657"
Sep  4 17:12:04.056513 systemd-networkd[1373]: cali84ff708ae6f: Gained IPv6LL
Sep  4 17:12:04.247380 systemd[1]: Started sshd@9-10.0.0.33:22-10.0.0.1:34220.service - OpenSSH per-connection server daemon (10.0.0.1:34220).
Sep  4 17:12:04.294215 sshd[4468]: Accepted publickey for core from 10.0.0.1 port 34220 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep  4 17:12:04.295810 sshd[4468]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:12:04.299869 systemd-logind[1410]: New session 10 of user core.
Sep  4 17:12:04.313490 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep  4 17:12:04.470750 kubelet[2482]: I0904 17:12:04.470666    2482 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep  4 17:12:04.470750 kubelet[2482]: I0904 17:12:04.470703    2482 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep  4 17:12:04.577926 kubelet[2482]: E0904 17:12:04.577758    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:12:04.587384 kubelet[2482]: I0904 17:12:04.586678    2482 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-chwdn" podStartSLOduration=23.877201336 podCreationTimestamp="2024-09-04 17:11:37 +0000 UTC" firstStartedPulling="2024-09-04 17:11:59.784557146 +0000 UTC m=+43.528815535" lastFinishedPulling="2024-09-04 17:12:03.49399715 +0000 UTC m=+47.238255539" observedRunningTime="2024-09-04 17:12:04.58617582 +0000 UTC m=+48.330434249" watchObservedRunningTime="2024-09-04 17:12:04.58664134 +0000 UTC m=+48.330899729"
Sep  4 17:12:04.613388 sshd[4468]: pam_unix(sshd:session): session closed for user core
Sep  4 17:12:04.621533 systemd[1]: sshd@9-10.0.0.33:22-10.0.0.1:34220.service: Deactivated successfully.
Sep  4 17:12:04.623722 systemd[1]: session-10.scope: Deactivated successfully.
Sep  4 17:12:04.625652 systemd-logind[1410]: Session 10 logged out. Waiting for processes to exit.
Sep  4 17:12:04.634384 systemd[1]: Started sshd@10-10.0.0.33:22-10.0.0.1:34230.service - OpenSSH per-connection server daemon (10.0.0.1:34230).
Sep  4 17:12:04.637856 systemd-logind[1410]: Removed session 10.
Sep  4 17:12:04.668039 sshd[4490]: Accepted publickey for core from 10.0.0.1 port 34230 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep  4 17:12:04.669520 sshd[4490]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:12:04.673644 systemd-logind[1410]: New session 11 of user core.
Sep  4 17:12:04.680487 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep  4 17:12:05.002234 sshd[4490]: pam_unix(sshd:session): session closed for user core
Sep  4 17:12:05.012764 systemd[1]: sshd@10-10.0.0.33:22-10.0.0.1:34230.service: Deactivated successfully.
Sep  4 17:12:05.014479 systemd[1]: session-11.scope: Deactivated successfully.
Sep  4 17:12:05.016197 systemd-logind[1410]: Session 11 logged out. Waiting for processes to exit.
Sep  4 17:12:05.029423 systemd[1]: Started sshd@11-10.0.0.33:22-10.0.0.1:34236.service - OpenSSH per-connection server daemon (10.0.0.1:34236).
Sep  4 17:12:05.029998 systemd-logind[1410]: Removed session 11.
Sep  4 17:12:05.057847 sshd[4503]: Accepted publickey for core from 10.0.0.1 port 34236 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep  4 17:12:05.059497 sshd[4503]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:12:05.064421 systemd-logind[1410]: New session 12 of user core.
Sep  4 17:12:05.074497 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep  4 17:12:05.242590 sshd[4503]: pam_unix(sshd:session): session closed for user core
Sep  4 17:12:05.245921 systemd[1]: sshd@11-10.0.0.33:22-10.0.0.1:34236.service: Deactivated successfully.
Sep  4 17:12:05.247828 systemd[1]: session-12.scope: Deactivated successfully.
Sep  4 17:12:05.249841 systemd-logind[1410]: Session 12 logged out. Waiting for processes to exit.
Sep  4 17:12:05.251175 systemd-logind[1410]: Removed session 12.
Sep  4 17:12:05.578759 kubelet[2482]: E0904 17:12:05.578732    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:12:10.258227 systemd[1]: Started sshd@12-10.0.0.33:22-10.0.0.1:34250.service - OpenSSH per-connection server daemon (10.0.0.1:34250).
Sep  4 17:12:10.297377 sshd[4523]: Accepted publickey for core from 10.0.0.1 port 34250 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep  4 17:12:10.298008 sshd[4523]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:12:10.303953 systemd-logind[1410]: New session 13 of user core.
Sep  4 17:12:10.309571 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep  4 17:12:10.500489 sshd[4523]: pam_unix(sshd:session): session closed for user core
Sep  4 17:12:10.511776 systemd[1]: sshd@12-10.0.0.33:22-10.0.0.1:34250.service: Deactivated successfully.
Sep  4 17:12:10.515740 systemd[1]: session-13.scope: Deactivated successfully.
Sep  4 17:12:10.519688 systemd-logind[1410]: Session 13 logged out. Waiting for processes to exit.
Sep  4 17:12:10.533041 systemd[1]: Started sshd@13-10.0.0.33:22-10.0.0.1:34252.service - OpenSSH per-connection server daemon (10.0.0.1:34252).
Sep  4 17:12:10.535607 systemd-logind[1410]: Removed session 13.
Sep  4 17:12:10.563046 sshd[4538]: Accepted publickey for core from 10.0.0.1 port 34252 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep  4 17:12:10.564648 sshd[4538]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:12:10.571473 systemd-logind[1410]: New session 14 of user core.
Sep  4 17:12:10.578824 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep  4 17:12:10.918440 sshd[4538]: pam_unix(sshd:session): session closed for user core
Sep  4 17:12:10.926432 systemd[1]: sshd@13-10.0.0.33:22-10.0.0.1:34252.service: Deactivated successfully.
Sep  4 17:12:10.928842 systemd[1]: session-14.scope: Deactivated successfully.
Sep  4 17:12:10.930522 systemd-logind[1410]: Session 14 logged out. Waiting for processes to exit.
Sep  4 17:12:10.942975 systemd[1]: Started sshd@14-10.0.0.33:22-10.0.0.1:34264.service - OpenSSH per-connection server daemon (10.0.0.1:34264).
Sep  4 17:12:10.944899 systemd-logind[1410]: Removed session 14.
Sep  4 17:12:10.976919 sshd[4551]: Accepted publickey for core from 10.0.0.1 port 34264 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep  4 17:12:10.978403 sshd[4551]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:12:10.982267 systemd-logind[1410]: New session 15 of user core.
Sep  4 17:12:10.988525 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep  4 17:12:11.859688 sshd[4551]: pam_unix(sshd:session): session closed for user core
Sep  4 17:12:11.873024 systemd[1]: sshd@14-10.0.0.33:22-10.0.0.1:34264.service: Deactivated successfully.
Sep  4 17:12:11.878268 systemd[1]: session-15.scope: Deactivated successfully.
Sep  4 17:12:11.879713 systemd-logind[1410]: Session 15 logged out. Waiting for processes to exit.
Sep  4 17:12:11.888786 systemd[1]: Started sshd@15-10.0.0.33:22-10.0.0.1:34268.service - OpenSSH per-connection server daemon (10.0.0.1:34268).
Sep  4 17:12:11.890746 systemd-logind[1410]: Removed session 15.
Sep  4 17:12:11.924979 sshd[4573]: Accepted publickey for core from 10.0.0.1 port 34268 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep  4 17:12:11.926612 sshd[4573]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:12:11.930394 systemd-logind[1410]: New session 16 of user core.
Sep  4 17:12:11.941554 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep  4 17:12:12.403057 sshd[4573]: pam_unix(sshd:session): session closed for user core
Sep  4 17:12:12.413184 systemd[1]: sshd@15-10.0.0.33:22-10.0.0.1:34268.service: Deactivated successfully.
Sep  4 17:12:12.416160 systemd[1]: session-16.scope: Deactivated successfully.
Sep  4 17:12:12.418275 systemd-logind[1410]: Session 16 logged out. Waiting for processes to exit.
Sep  4 17:12:12.427008 systemd[1]: Started sshd@16-10.0.0.33:22-10.0.0.1:34280.service - OpenSSH per-connection server daemon (10.0.0.1:34280).
Sep  4 17:12:12.429396 systemd-logind[1410]: Removed session 16.
Sep  4 17:12:12.463896 sshd[4593]: Accepted publickey for core from 10.0.0.1 port 34280 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep  4 17:12:12.465947 sshd[4593]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:12:12.470819 systemd-logind[1410]: New session 17 of user core.
Sep  4 17:12:12.478510 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep  4 17:12:12.646493 sshd[4593]: pam_unix(sshd:session): session closed for user core
Sep  4 17:12:12.649209 systemd[1]: sshd@16-10.0.0.33:22-10.0.0.1:34280.service: Deactivated successfully.
Sep  4 17:12:12.650900 systemd[1]: session-17.scope: Deactivated successfully.
Sep  4 17:12:12.652449 systemd-logind[1410]: Session 17 logged out. Waiting for processes to exit.
Sep  4 17:12:12.653356 systemd-logind[1410]: Removed session 17.
Sep  4 17:12:15.285374 kubelet[2482]: E0904 17:12:15.285337    2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep  4 17:12:16.343179 containerd[1429]: time="2024-09-04T17:12:16.342788691Z" level=info msg="StopPodSandbox for \"d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0\""
Sep  4 17:12:16.427414 containerd[1429]: 2024-09-04 17:12:16.391 [WARNING][4650] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--chwdn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ad9729de-5f0a-425d-b5ea-b886ce65bfc9", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2b7403511ee9ff68d1bba48f6af0f524c1d4540149227d10fbf8d5e8a3b747b5", Pod:"csi-node-driver-chwdn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8161346a695", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:12:16.427414 containerd[1429]: 2024-09-04 17:12:16.391 [INFO][4650] k8s.go 608: Cleaning up netns ContainerID="d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0"
Sep  4 17:12:16.427414 containerd[1429]: 2024-09-04 17:12:16.391 [INFO][4650] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0" iface="eth0" netns=""
Sep  4 17:12:16.427414 containerd[1429]: 2024-09-04 17:12:16.391 [INFO][4650] k8s.go 615: Releasing IP address(es) ContainerID="d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0"
Sep  4 17:12:16.427414 containerd[1429]: 2024-09-04 17:12:16.391 [INFO][4650] utils.go 188: Calico CNI releasing IP address ContainerID="d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0"
Sep  4 17:12:16.427414 containerd[1429]: 2024-09-04 17:12:16.412 [INFO][4661] ipam_plugin.go 417: Releasing address using handleID ContainerID="d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0" HandleID="k8s-pod-network.d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0" Workload="localhost-k8s-csi--node--driver--chwdn-eth0"
Sep  4 17:12:16.427414 containerd[1429]: 2024-09-04 17:12:16.413 [INFO][4661] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:12:16.427414 containerd[1429]: 2024-09-04 17:12:16.413 [INFO][4661] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:12:16.427414 containerd[1429]: 2024-09-04 17:12:16.421 [WARNING][4661] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0" HandleID="k8s-pod-network.d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0" Workload="localhost-k8s-csi--node--driver--chwdn-eth0"
Sep  4 17:12:16.427414 containerd[1429]: 2024-09-04 17:12:16.421 [INFO][4661] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0" HandleID="k8s-pod-network.d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0" Workload="localhost-k8s-csi--node--driver--chwdn-eth0"
Sep  4 17:12:16.427414 containerd[1429]: 2024-09-04 17:12:16.424 [INFO][4661] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:12:16.427414 containerd[1429]: 2024-09-04 17:12:16.425 [INFO][4650] k8s.go 621: Teardown processing complete. ContainerID="d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0"
Sep  4 17:12:16.427414 containerd[1429]: time="2024-09-04T17:12:16.427248155Z" level=info msg="TearDown network for sandbox \"d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0\" successfully"
Sep  4 17:12:16.427414 containerd[1429]: time="2024-09-04T17:12:16.427288395Z" level=info msg="StopPodSandbox for \"d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0\" returns successfully"
Sep  4 17:12:16.428493 containerd[1429]: time="2024-09-04T17:12:16.427815476Z" level=info msg="RemovePodSandbox for \"d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0\""
Sep  4 17:12:16.442758 containerd[1429]: time="2024-09-04T17:12:16.430953839Z" level=info msg="Forcibly stopping sandbox \"d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0\""
Sep  4 17:12:16.508423 containerd[1429]: 2024-09-04 17:12:16.476 [WARNING][4684] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--chwdn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ad9729de-5f0a-425d-b5ea-b886ce65bfc9", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2b7403511ee9ff68d1bba48f6af0f524c1d4540149227d10fbf8d5e8a3b747b5", Pod:"csi-node-driver-chwdn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8161346a695", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:12:16.508423 containerd[1429]: 2024-09-04 17:12:16.477 [INFO][4684] k8s.go 608: Cleaning up netns ContainerID="d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0"
Sep  4 17:12:16.508423 containerd[1429]: 2024-09-04 17:12:16.477 [INFO][4684] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0" iface="eth0" netns=""
Sep  4 17:12:16.508423 containerd[1429]: 2024-09-04 17:12:16.477 [INFO][4684] k8s.go 615: Releasing IP address(es) ContainerID="d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0"
Sep  4 17:12:16.508423 containerd[1429]: 2024-09-04 17:12:16.477 [INFO][4684] utils.go 188: Calico CNI releasing IP address ContainerID="d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0"
Sep  4 17:12:16.508423 containerd[1429]: 2024-09-04 17:12:16.495 [INFO][4692] ipam_plugin.go 417: Releasing address using handleID ContainerID="d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0" HandleID="k8s-pod-network.d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0" Workload="localhost-k8s-csi--node--driver--chwdn-eth0"
Sep  4 17:12:16.508423 containerd[1429]: 2024-09-04 17:12:16.495 [INFO][4692] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:12:16.508423 containerd[1429]: 2024-09-04 17:12:16.496 [INFO][4692] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:12:16.508423 containerd[1429]: 2024-09-04 17:12:16.504 [WARNING][4692] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0" HandleID="k8s-pod-network.d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0" Workload="localhost-k8s-csi--node--driver--chwdn-eth0"
Sep  4 17:12:16.508423 containerd[1429]: 2024-09-04 17:12:16.504 [INFO][4692] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0" HandleID="k8s-pod-network.d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0" Workload="localhost-k8s-csi--node--driver--chwdn-eth0"
Sep  4 17:12:16.508423 containerd[1429]: 2024-09-04 17:12:16.505 [INFO][4692] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:12:16.508423 containerd[1429]: 2024-09-04 17:12:16.506 [INFO][4684] k8s.go 621: Teardown processing complete. ContainerID="d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0"
Sep  4 17:12:16.508822 containerd[1429]: time="2024-09-04T17:12:16.508454974Z" level=info msg="TearDown network for sandbox \"d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0\" successfully"
Sep  4 17:12:16.587806 containerd[1429]: time="2024-09-04T17:12:16.587727152Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep  4 17:12:16.587924 containerd[1429]: time="2024-09-04T17:12:16.587830552Z" level=info msg="RemovePodSandbox \"d48c7989d348fbc123a5998b0c3813a26f83e658582213cb4344634dc2208ee0\" returns successfully"
Sep  4 17:12:16.588396 containerd[1429]: time="2024-09-04T17:12:16.588361312Z" level=info msg="StopPodSandbox for \"9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598\""
Sep  4 17:12:16.659922 containerd[1429]: 2024-09-04 17:12:16.626 [WARNING][4714] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--kx782-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"952dfd7d-4c90-4dae-9fa9-05a48a9c20ce", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a57e999630e5b168a58e72e06e4d4c87d30383d816c8ec5812764f19889fcaf", Pod:"coredns-5dd5756b68-kx782", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84ff708ae6f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:12:16.659922 containerd[1429]: 2024-09-04 17:12:16.626 [INFO][4714] k8s.go 608: Cleaning up netns ContainerID="9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598"
Sep  4 17:12:16.659922 containerd[1429]: 2024-09-04 17:12:16.626 [INFO][4714] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598" iface="eth0" netns=""
Sep  4 17:12:16.659922 containerd[1429]: 2024-09-04 17:12:16.626 [INFO][4714] k8s.go 615: Releasing IP address(es) ContainerID="9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598"
Sep  4 17:12:16.659922 containerd[1429]: 2024-09-04 17:12:16.626 [INFO][4714] utils.go 188: Calico CNI releasing IP address ContainerID="9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598"
Sep  4 17:12:16.659922 containerd[1429]: 2024-09-04 17:12:16.645 [INFO][4721] ipam_plugin.go 417: Releasing address using handleID ContainerID="9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598" HandleID="k8s-pod-network.9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598" Workload="localhost-k8s-coredns--5dd5756b68--kx782-eth0"
Sep  4 17:12:16.659922 containerd[1429]: 2024-09-04 17:12:16.645 [INFO][4721] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:12:16.659922 containerd[1429]: 2024-09-04 17:12:16.645 [INFO][4721] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:12:16.659922 containerd[1429]: 2024-09-04 17:12:16.654 [WARNING][4721] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598" HandleID="k8s-pod-network.9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598" Workload="localhost-k8s-coredns--5dd5756b68--kx782-eth0"
Sep  4 17:12:16.659922 containerd[1429]: 2024-09-04 17:12:16.654 [INFO][4721] ipam_plugin.go 445: Releasing address using workloadID ContainerID="9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598" HandleID="k8s-pod-network.9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598" Workload="localhost-k8s-coredns--5dd5756b68--kx782-eth0"
Sep  4 17:12:16.659922 containerd[1429]: 2024-09-04 17:12:16.656 [INFO][4721] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:12:16.659922 containerd[1429]: 2024-09-04 17:12:16.657 [INFO][4714] k8s.go 621: Teardown processing complete. ContainerID="9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598"
Sep  4 17:12:16.660652 containerd[1429]: time="2024-09-04T17:12:16.659906160Z" level=info msg="TearDown network for sandbox \"9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598\" successfully"
Sep  4 17:12:16.660652 containerd[1429]: time="2024-09-04T17:12:16.659940600Z" level=info msg="StopPodSandbox for \"9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598\" returns successfully"
Sep  4 17:12:16.661524 containerd[1429]: time="2024-09-04T17:12:16.661423082Z" level=info msg="RemovePodSandbox for \"9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598\""
Sep  4 17:12:16.661608 containerd[1429]: time="2024-09-04T17:12:16.661506962Z" level=info msg="Forcibly stopping sandbox \"9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598\""
Sep  4 17:12:16.733646 containerd[1429]: 2024-09-04 17:12:16.696 [WARNING][4743] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--kx782-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"952dfd7d-4c90-4dae-9fa9-05a48a9c20ce", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a57e999630e5b168a58e72e06e4d4c87d30383d816c8ec5812764f19889fcaf", Pod:"coredns-5dd5756b68-kx782", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84ff708ae6f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:12:16.733646 containerd[1429]: 2024-09-04 17:12:16.696 [INFO][4743] k8s.go 608: Cleaning up netns ContainerID="9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598"
Sep  4 17:12:16.733646 containerd[1429]: 2024-09-04 17:12:16.696 [INFO][4743] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598" iface="eth0" netns=""
Sep  4 17:12:16.733646 containerd[1429]: 2024-09-04 17:12:16.696 [INFO][4743] k8s.go 615: Releasing IP address(es) ContainerID="9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598"
Sep  4 17:12:16.733646 containerd[1429]: 2024-09-04 17:12:16.696 [INFO][4743] utils.go 188: Calico CNI releasing IP address ContainerID="9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598"
Sep  4 17:12:16.733646 containerd[1429]: 2024-09-04 17:12:16.720 [INFO][4750] ipam_plugin.go 417: Releasing address using handleID ContainerID="9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598" HandleID="k8s-pod-network.9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598" Workload="localhost-k8s-coredns--5dd5756b68--kx782-eth0"
Sep  4 17:12:16.733646 containerd[1429]: 2024-09-04 17:12:16.720 [INFO][4750] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:12:16.733646 containerd[1429]: 2024-09-04 17:12:16.720 [INFO][4750] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:12:16.733646 containerd[1429]: 2024-09-04 17:12:16.729 [WARNING][4750] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598" HandleID="k8s-pod-network.9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598" Workload="localhost-k8s-coredns--5dd5756b68--kx782-eth0"
Sep  4 17:12:16.733646 containerd[1429]: 2024-09-04 17:12:16.729 [INFO][4750] ipam_plugin.go 445: Releasing address using workloadID ContainerID="9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598" HandleID="k8s-pod-network.9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598" Workload="localhost-k8s-coredns--5dd5756b68--kx782-eth0"
Sep  4 17:12:16.733646 containerd[1429]: 2024-09-04 17:12:16.730 [INFO][4750] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:12:16.733646 containerd[1429]: 2024-09-04 17:12:16.732 [INFO][4743] k8s.go 621: Teardown processing complete. ContainerID="9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598"
Sep  4 17:12:16.734072 containerd[1429]: time="2024-09-04T17:12:16.733671090Z" level=info msg="TearDown network for sandbox \"9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598\" successfully"
Sep  4 17:12:16.896403 containerd[1429]: time="2024-09-04T17:12:16.896354610Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep  4 17:12:16.896762 containerd[1429]: time="2024-09-04T17:12:16.896436730Z" level=info msg="RemovePodSandbox \"9a4b5d917015699734501b8993d33bc9b9073788b567d1b3d6286fb1eb126598\" returns successfully"
Sep  4 17:12:16.896962 containerd[1429]: time="2024-09-04T17:12:16.896863650Z" level=info msg="StopPodSandbox for \"54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071\""
Sep  4 17:12:16.967047 containerd[1429]: 2024-09-04 17:12:16.931 [WARNING][4772] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78c75d8fb8--b7r8w-eth0", GenerateName:"calico-kube-controllers-78c75d8fb8-", Namespace:"calico-system", SelfLink:"", UID:"a8d222b3-d5db-4dcc-9a51-2bef0d400fc1", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78c75d8fb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"57fa94f553340dfd6a393e01557422ac84eff0f57eb648d72c5c3905ebde2ab4", Pod:"calico-kube-controllers-78c75d8fb8-b7r8w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3f7da7331b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:12:16.967047 containerd[1429]: 2024-09-04 17:12:16.931 [INFO][4772] k8s.go 608: Cleaning up netns ContainerID="54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071"
Sep  4 17:12:16.967047 containerd[1429]: 2024-09-04 17:12:16.931 [INFO][4772] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071" iface="eth0" netns=""
Sep  4 17:12:16.967047 containerd[1429]: 2024-09-04 17:12:16.931 [INFO][4772] k8s.go 615: Releasing IP address(es) ContainerID="54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071"
Sep  4 17:12:16.967047 containerd[1429]: 2024-09-04 17:12:16.931 [INFO][4772] utils.go 188: Calico CNI releasing IP address ContainerID="54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071"
Sep  4 17:12:16.967047 containerd[1429]: 2024-09-04 17:12:16.953 [INFO][4779] ipam_plugin.go 417: Releasing address using handleID ContainerID="54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071" HandleID="k8s-pod-network.54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071" Workload="localhost-k8s-calico--kube--controllers--78c75d8fb8--b7r8w-eth0"
Sep  4 17:12:16.967047 containerd[1429]: 2024-09-04 17:12:16.953 [INFO][4779] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:12:16.967047 containerd[1429]: 2024-09-04 17:12:16.953 [INFO][4779] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:12:16.967047 containerd[1429]: 2024-09-04 17:12:16.962 [WARNING][4779] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071" HandleID="k8s-pod-network.54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071" Workload="localhost-k8s-calico--kube--controllers--78c75d8fb8--b7r8w-eth0"
Sep  4 17:12:16.967047 containerd[1429]: 2024-09-04 17:12:16.962 [INFO][4779] ipam_plugin.go 445: Releasing address using workloadID ContainerID="54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071" HandleID="k8s-pod-network.54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071" Workload="localhost-k8s-calico--kube--controllers--78c75d8fb8--b7r8w-eth0"
Sep  4 17:12:16.967047 containerd[1429]: 2024-09-04 17:12:16.964 [INFO][4779] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:12:16.967047 containerd[1429]: 2024-09-04 17:12:16.965 [INFO][4772] k8s.go 621: Teardown processing complete. ContainerID="54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071"
Sep  4 17:12:16.967448 containerd[1429]: time="2024-09-04T17:12:16.967044496Z" level=info msg="TearDown network for sandbox \"54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071\" successfully"
Sep  4 17:12:16.967448 containerd[1429]: time="2024-09-04T17:12:16.967071936Z" level=info msg="StopPodSandbox for \"54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071\" returns successfully"
Sep  4 17:12:16.969263 containerd[1429]: time="2024-09-04T17:12:16.968295618Z" level=info msg="RemovePodSandbox for \"54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071\""
Sep  4 17:12:16.969263 containerd[1429]: time="2024-09-04T17:12:16.968361978Z" level=info msg="Forcibly stopping sandbox \"54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071\""
Sep  4 17:12:17.041772 containerd[1429]: 2024-09-04 17:12:17.004 [WARNING][4801] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78c75d8fb8--b7r8w-eth0", GenerateName:"calico-kube-controllers-78c75d8fb8-", Namespace:"calico-system", SelfLink:"", UID:"a8d222b3-d5db-4dcc-9a51-2bef0d400fc1", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78c75d8fb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"57fa94f553340dfd6a393e01557422ac84eff0f57eb648d72c5c3905ebde2ab4", Pod:"calico-kube-controllers-78c75d8fb8-b7r8w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3f7da7331b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:12:17.041772 containerd[1429]: 2024-09-04 17:12:17.005 [INFO][4801] k8s.go 608: Cleaning up netns ContainerID="54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071"
Sep  4 17:12:17.041772 containerd[1429]: 2024-09-04 17:12:17.005 [INFO][4801] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071" iface="eth0" netns=""
Sep  4 17:12:17.041772 containerd[1429]: 2024-09-04 17:12:17.005 [INFO][4801] k8s.go 615: Releasing IP address(es) ContainerID="54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071"
Sep  4 17:12:17.041772 containerd[1429]: 2024-09-04 17:12:17.005 [INFO][4801] utils.go 188: Calico CNI releasing IP address ContainerID="54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071"
Sep  4 17:12:17.041772 containerd[1429]: 2024-09-04 17:12:17.023 [INFO][4809] ipam_plugin.go 417: Releasing address using handleID ContainerID="54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071" HandleID="k8s-pod-network.54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071" Workload="localhost-k8s-calico--kube--controllers--78c75d8fb8--b7r8w-eth0"
Sep  4 17:12:17.041772 containerd[1429]: 2024-09-04 17:12:17.023 [INFO][4809] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:12:17.041772 containerd[1429]: 2024-09-04 17:12:17.023 [INFO][4809] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:12:17.041772 containerd[1429]: 2024-09-04 17:12:17.036 [WARNING][4809] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071" HandleID="k8s-pod-network.54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071" Workload="localhost-k8s-calico--kube--controllers--78c75d8fb8--b7r8w-eth0"
Sep  4 17:12:17.041772 containerd[1429]: 2024-09-04 17:12:17.036 [INFO][4809] ipam_plugin.go 445: Releasing address using workloadID ContainerID="54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071" HandleID="k8s-pod-network.54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071" Workload="localhost-k8s-calico--kube--controllers--78c75d8fb8--b7r8w-eth0"
Sep  4 17:12:17.041772 containerd[1429]: 2024-09-04 17:12:17.038 [INFO][4809] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:12:17.041772 containerd[1429]: 2024-09-04 17:12:17.039 [INFO][4801] k8s.go 621: Teardown processing complete. ContainerID="54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071"
Sep  4 17:12:17.042202 containerd[1429]: time="2024-09-04T17:12:17.041812108Z" level=info msg="TearDown network for sandbox \"54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071\" successfully"
Sep  4 17:12:17.054334 containerd[1429]: time="2024-09-04T17:12:17.054240643Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep  4 17:12:17.054334 containerd[1429]: time="2024-09-04T17:12:17.054341283Z" level=info msg="RemovePodSandbox \"54828c33418b835dbd4466ba24e871fcca3fbc96e4dc828857854b680bca8071\" returns successfully"
Sep  4 17:12:17.055702 containerd[1429]: time="2024-09-04T17:12:17.055655284Z" level=info msg="StopPodSandbox for \"7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e\""
Sep  4 17:12:17.132806 containerd[1429]: 2024-09-04 17:12:17.098 [WARNING][4832] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--c5ss6-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"7bb12c91-2761-4677-914c-40c5d21a7ccb", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"34d50c5a8f4bf76419fd782e0fdb29bc0aac9046ca64982bb54bf60dbab03d6d", Pod:"coredns-5dd5756b68-c5ss6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid40c5f4a50c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:12:17.132806 containerd[1429]: 2024-09-04 17:12:17.098 [INFO][4832] k8s.go 608: Cleaning up netns ContainerID="7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e"
Sep  4 17:12:17.132806 containerd[1429]: 2024-09-04 17:12:17.098 [INFO][4832] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e" iface="eth0" netns=""
Sep  4 17:12:17.132806 containerd[1429]: 2024-09-04 17:12:17.098 [INFO][4832] k8s.go 615: Releasing IP address(es) ContainerID="7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e"
Sep  4 17:12:17.132806 containerd[1429]: 2024-09-04 17:12:17.098 [INFO][4832] utils.go 188: Calico CNI releasing IP address ContainerID="7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e"
Sep  4 17:12:17.132806 containerd[1429]: 2024-09-04 17:12:17.117 [INFO][4839] ipam_plugin.go 417: Releasing address using handleID ContainerID="7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e" HandleID="k8s-pod-network.7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e" Workload="localhost-k8s-coredns--5dd5756b68--c5ss6-eth0"
Sep  4 17:12:17.132806 containerd[1429]: 2024-09-04 17:12:17.117 [INFO][4839] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:12:17.132806 containerd[1429]: 2024-09-04 17:12:17.117 [INFO][4839] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:12:17.132806 containerd[1429]: 2024-09-04 17:12:17.128 [WARNING][4839] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e" HandleID="k8s-pod-network.7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e" Workload="localhost-k8s-coredns--5dd5756b68--c5ss6-eth0"
Sep  4 17:12:17.132806 containerd[1429]: 2024-09-04 17:12:17.128 [INFO][4839] ipam_plugin.go 445: Releasing address using workloadID ContainerID="7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e" HandleID="k8s-pod-network.7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e" Workload="localhost-k8s-coredns--5dd5756b68--c5ss6-eth0"
Sep  4 17:12:17.132806 containerd[1429]: 2024-09-04 17:12:17.129 [INFO][4839] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:12:17.132806 containerd[1429]: 2024-09-04 17:12:17.131 [INFO][4832] k8s.go 621: Teardown processing complete. ContainerID="7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e"
Sep  4 17:12:17.133229 containerd[1429]: time="2024-09-04T17:12:17.132823498Z" level=info msg="TearDown network for sandbox \"7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e\" successfully"
Sep  4 17:12:17.133229 containerd[1429]: time="2024-09-04T17:12:17.132855378Z" level=info msg="StopPodSandbox for \"7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e\" returns successfully"
Sep  4 17:12:17.133785 containerd[1429]: time="2024-09-04T17:12:17.133446099Z" level=info msg="RemovePodSandbox for \"7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e\""
Sep  4 17:12:17.133785 containerd[1429]: time="2024-09-04T17:12:17.133489659Z" level=info msg="Forcibly stopping sandbox \"7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e\""
Sep  4 17:12:17.218642 containerd[1429]: 2024-09-04 17:12:17.171 [WARNING][4861] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--c5ss6-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"7bb12c91-2761-4677-914c-40c5d21a7ccb", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"34d50c5a8f4bf76419fd782e0fdb29bc0aac9046ca64982bb54bf60dbab03d6d", Pod:"coredns-5dd5756b68-c5ss6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid40c5f4a50c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:12:17.218642 containerd[1429]: 2024-09-04 17:12:17.172 [INFO][4861] k8s.go 608: Cleaning up netns ContainerID="7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e"
Sep  4 17:12:17.218642 containerd[1429]: 2024-09-04 17:12:17.172 [INFO][4861] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e" iface="eth0" netns=""
Sep  4 17:12:17.218642 containerd[1429]: 2024-09-04 17:12:17.172 [INFO][4861] k8s.go 615: Releasing IP address(es) ContainerID="7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e"
Sep  4 17:12:17.218642 containerd[1429]: 2024-09-04 17:12:17.172 [INFO][4861] utils.go 188: Calico CNI releasing IP address ContainerID="7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e"
Sep  4 17:12:17.218642 containerd[1429]: 2024-09-04 17:12:17.201 [INFO][4869] ipam_plugin.go 417: Releasing address using handleID ContainerID="7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e" HandleID="k8s-pod-network.7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e" Workload="localhost-k8s-coredns--5dd5756b68--c5ss6-eth0"
Sep  4 17:12:17.218642 containerd[1429]: 2024-09-04 17:12:17.201 [INFO][4869] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:12:17.218642 containerd[1429]: 2024-09-04 17:12:17.201 [INFO][4869] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:12:17.218642 containerd[1429]: 2024-09-04 17:12:17.210 [WARNING][4869] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e" HandleID="k8s-pod-network.7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e" Workload="localhost-k8s-coredns--5dd5756b68--c5ss6-eth0"
Sep  4 17:12:17.218642 containerd[1429]: 2024-09-04 17:12:17.210 [INFO][4869] ipam_plugin.go 445: Releasing address using workloadID ContainerID="7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e" HandleID="k8s-pod-network.7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e" Workload="localhost-k8s-coredns--5dd5756b68--c5ss6-eth0"
Sep  4 17:12:17.218642 containerd[1429]: 2024-09-04 17:12:17.212 [INFO][4869] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:12:17.218642 containerd[1429]: 2024-09-04 17:12:17.215 [INFO][4861] k8s.go 621: Teardown processing complete. ContainerID="7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e"
Sep  4 17:12:17.218642 containerd[1429]: time="2024-09-04T17:12:17.218612763Z" level=info msg="TearDown network for sandbox \"7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e\" successfully"
Sep  4 17:12:17.222008 containerd[1429]: time="2024-09-04T17:12:17.221965007Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep  4 17:12:17.222126 containerd[1429]: time="2024-09-04T17:12:17.222047727Z" level=info msg="RemovePodSandbox \"7af1e1eacc747e6f5238a08a48594d3ba79f8b5ff7257fbfd92186543739c47e\" returns successfully"
Sep  4 17:12:17.658291 systemd[1]: Started sshd@17-10.0.0.33:22-10.0.0.1:48728.service - OpenSSH per-connection server daemon (10.0.0.1:48728).
Sep  4 17:12:17.695028 sshd[4897]: Accepted publickey for core from 10.0.0.1 port 48728 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep  4 17:12:17.696582 sshd[4897]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:12:17.700544 systemd-logind[1410]: New session 18 of user core.
Sep  4 17:12:17.708512 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep  4 17:12:17.831495 sshd[4897]: pam_unix(sshd:session): session closed for user core
Sep  4 17:12:17.836440 systemd[1]: sshd@17-10.0.0.33:22-10.0.0.1:48728.service: Deactivated successfully.
Sep  4 17:12:17.838391 systemd[1]: session-18.scope: Deactivated successfully.
Sep  4 17:12:17.840538 systemd-logind[1410]: Session 18 logged out. Waiting for processes to exit.
Sep  4 17:12:17.841420 systemd-logind[1410]: Removed session 18.
Sep  4 17:12:21.534926 kubelet[2482]: I0904 17:12:21.534875    2482 topology_manager.go:215] "Topology Admit Handler" podUID="f6a00a74-46f1-44e4-bb76-18017f205de4" podNamespace="calico-apiserver" podName="calico-apiserver-7b97449f75-dcv27"
Sep  4 17:12:21.547765 systemd[1]: Created slice kubepods-besteffort-podf6a00a74_46f1_44e4_bb76_18017f205de4.slice - libcontainer container kubepods-besteffort-podf6a00a74_46f1_44e4_bb76_18017f205de4.slice.
Sep  4 17:12:21.683664 kubelet[2482]: I0904 17:12:21.683506    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f6a00a74-46f1-44e4-bb76-18017f205de4-calico-apiserver-certs\") pod \"calico-apiserver-7b97449f75-dcv27\" (UID: \"f6a00a74-46f1-44e4-bb76-18017f205de4\") " pod="calico-apiserver/calico-apiserver-7b97449f75-dcv27"
Sep  4 17:12:21.683664 kubelet[2482]: I0904 17:12:21.683556    2482 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh64m\" (UniqueName: \"kubernetes.io/projected/f6a00a74-46f1-44e4-bb76-18017f205de4-kube-api-access-fh64m\") pod \"calico-apiserver-7b97449f75-dcv27\" (UID: \"f6a00a74-46f1-44e4-bb76-18017f205de4\") " pod="calico-apiserver/calico-apiserver-7b97449f75-dcv27"
Sep  4 17:12:21.784251 kubelet[2482]: E0904 17:12:21.784189    2482 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found
Sep  4 17:12:21.791414 kubelet[2482]: E0904 17:12:21.791086    2482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6a00a74-46f1-44e4-bb76-18017f205de4-calico-apiserver-certs podName:f6a00a74-46f1-44e4-bb76-18017f205de4 nodeName:}" failed. No retries permitted until 2024-09-04 17:12:22.284246847 +0000 UTC m=+66.028505236 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/f6a00a74-46f1-44e4-bb76-18017f205de4-calico-apiserver-certs") pod "calico-apiserver-7b97449f75-dcv27" (UID: "f6a00a74-46f1-44e4-bb76-18017f205de4") : secret "calico-apiserver-certs" not found
Sep  4 17:12:22.451697 containerd[1429]: time="2024-09-04T17:12:22.451271911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b97449f75-dcv27,Uid:f6a00a74-46f1-44e4-bb76-18017f205de4,Namespace:calico-apiserver,Attempt:0,}"
Sep  4 17:12:22.661724 systemd-networkd[1373]: cali7f82cbd11c3: Link UP
Sep  4 17:12:22.661967 systemd-networkd[1373]: cali7f82cbd11c3: Gained carrier
Sep  4 17:12:22.676186 containerd[1429]: 2024-09-04 17:12:22.575 [INFO][4921] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7b97449f75--dcv27-eth0 calico-apiserver-7b97449f75- calico-apiserver  f6a00a74-46f1-44e4-bb76-18017f205de4 1087 0 2024-09-04 17:12:21 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b97449f75 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s  localhost  calico-apiserver-7b97449f75-dcv27 eth0 calico-apiserver [] []   [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7f82cbd11c3  [] []}} ContainerID="32b55b59fb92c42e9af8e6665e2754910cb643f37918b742dcc0e689c8f5d222" Namespace="calico-apiserver" Pod="calico-apiserver-7b97449f75-dcv27" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b97449f75--dcv27-"
Sep  4 17:12:22.676186 containerd[1429]: 2024-09-04 17:12:22.576 [INFO][4921] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="32b55b59fb92c42e9af8e6665e2754910cb643f37918b742dcc0e689c8f5d222" Namespace="calico-apiserver" Pod="calico-apiserver-7b97449f75-dcv27" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b97449f75--dcv27-eth0"
Sep  4 17:12:22.676186 containerd[1429]: 2024-09-04 17:12:22.603 [INFO][4933] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="32b55b59fb92c42e9af8e6665e2754910cb643f37918b742dcc0e689c8f5d222" HandleID="k8s-pod-network.32b55b59fb92c42e9af8e6665e2754910cb643f37918b742dcc0e689c8f5d222" Workload="localhost-k8s-calico--apiserver--7b97449f75--dcv27-eth0"
Sep  4 17:12:22.676186 containerd[1429]: 2024-09-04 17:12:22.616 [INFO][4933] ipam_plugin.go 270: Auto assigning IP ContainerID="32b55b59fb92c42e9af8e6665e2754910cb643f37918b742dcc0e689c8f5d222" HandleID="k8s-pod-network.32b55b59fb92c42e9af8e6665e2754910cb643f37918b742dcc0e689c8f5d222" Workload="localhost-k8s-calico--apiserver--7b97449f75--dcv27-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400062f290), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7b97449f75-dcv27", "timestamp":"2024-09-04 17:12:22.603634703 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep  4 17:12:22.676186 containerd[1429]: 2024-09-04 17:12:22.617 [INFO][4933] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep  4 17:12:22.676186 containerd[1429]: 2024-09-04 17:12:22.617 [INFO][4933] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep  4 17:12:22.676186 containerd[1429]: 2024-09-04 17:12:22.617 [INFO][4933] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep  4 17:12:22.676186 containerd[1429]: 2024-09-04 17:12:22.619 [INFO][4933] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.32b55b59fb92c42e9af8e6665e2754910cb643f37918b742dcc0e689c8f5d222" host="localhost"
Sep  4 17:12:22.676186 containerd[1429]: 2024-09-04 17:12:22.624 [INFO][4933] ipam.go 372: Looking up existing affinities for host host="localhost"
Sep  4 17:12:22.676186 containerd[1429]: 2024-09-04 17:12:22.633 [INFO][4933] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Sep  4 17:12:22.676186 containerd[1429]: 2024-09-04 17:12:22.635 [INFO][4933] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep  4 17:12:22.676186 containerd[1429]: 2024-09-04 17:12:22.640 [INFO][4933] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep  4 17:12:22.676186 containerd[1429]: 2024-09-04 17:12:22.640 [INFO][4933] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.32b55b59fb92c42e9af8e6665e2754910cb643f37918b742dcc0e689c8f5d222" host="localhost"
Sep  4 17:12:22.676186 containerd[1429]: 2024-09-04 17:12:22.641 [INFO][4933] ipam.go 1685: Creating new handle: k8s-pod-network.32b55b59fb92c42e9af8e6665e2754910cb643f37918b742dcc0e689c8f5d222
Sep  4 17:12:22.676186 containerd[1429]: 2024-09-04 17:12:22.645 [INFO][4933] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.32b55b59fb92c42e9af8e6665e2754910cb643f37918b742dcc0e689c8f5d222" host="localhost"
Sep  4 17:12:22.676186 containerd[1429]: 2024-09-04 17:12:22.653 [INFO][4933] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.32b55b59fb92c42e9af8e6665e2754910cb643f37918b742dcc0e689c8f5d222" host="localhost"
Sep  4 17:12:22.676186 containerd[1429]: 2024-09-04 17:12:22.653 [INFO][4933] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.32b55b59fb92c42e9af8e6665e2754910cb643f37918b742dcc0e689c8f5d222" host="localhost"
Sep  4 17:12:22.676186 containerd[1429]: 2024-09-04 17:12:22.653 [INFO][4933] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep  4 17:12:22.676186 containerd[1429]: 2024-09-04 17:12:22.653 [INFO][4933] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="32b55b59fb92c42e9af8e6665e2754910cb643f37918b742dcc0e689c8f5d222" HandleID="k8s-pod-network.32b55b59fb92c42e9af8e6665e2754910cb643f37918b742dcc0e689c8f5d222" Workload="localhost-k8s-calico--apiserver--7b97449f75--dcv27-eth0"
Sep  4 17:12:22.676929 containerd[1429]: 2024-09-04 17:12:22.656 [INFO][4921] k8s.go 386: Populated endpoint ContainerID="32b55b59fb92c42e9af8e6665e2754910cb643f37918b742dcc0e689c8f5d222" Namespace="calico-apiserver" Pod="calico-apiserver-7b97449f75-dcv27" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b97449f75--dcv27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b97449f75--dcv27-eth0", GenerateName:"calico-apiserver-7b97449f75-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6a00a74-46f1-44e4-bb76-18017f205de4", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b97449f75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7b97449f75-dcv27", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7f82cbd11c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:12:22.676929 containerd[1429]: 2024-09-04 17:12:22.656 [INFO][4921] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="32b55b59fb92c42e9af8e6665e2754910cb643f37918b742dcc0e689c8f5d222" Namespace="calico-apiserver" Pod="calico-apiserver-7b97449f75-dcv27" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b97449f75--dcv27-eth0"
Sep  4 17:12:22.676929 containerd[1429]: 2024-09-04 17:12:22.656 [INFO][4921] dataplane_linux.go 68: Setting the host side veth name to cali7f82cbd11c3 ContainerID="32b55b59fb92c42e9af8e6665e2754910cb643f37918b742dcc0e689c8f5d222" Namespace="calico-apiserver" Pod="calico-apiserver-7b97449f75-dcv27" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b97449f75--dcv27-eth0"
Sep  4 17:12:22.676929 containerd[1429]: 2024-09-04 17:12:22.660 [INFO][4921] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="32b55b59fb92c42e9af8e6665e2754910cb643f37918b742dcc0e689c8f5d222" Namespace="calico-apiserver" Pod="calico-apiserver-7b97449f75-dcv27" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b97449f75--dcv27-eth0"
Sep  4 17:12:22.676929 containerd[1429]: 2024-09-04 17:12:22.660 [INFO][4921] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="32b55b59fb92c42e9af8e6665e2754910cb643f37918b742dcc0e689c8f5d222" Namespace="calico-apiserver" Pod="calico-apiserver-7b97449f75-dcv27" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b97449f75--dcv27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b97449f75--dcv27-eth0", GenerateName:"calico-apiserver-7b97449f75-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6a00a74-46f1-44e4-bb76-18017f205de4", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b97449f75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"32b55b59fb92c42e9af8e6665e2754910cb643f37918b742dcc0e689c8f5d222", Pod:"calico-apiserver-7b97449f75-dcv27", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7f82cbd11c3", MAC:"06:8d:99:3b:b8:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep  4 17:12:22.676929 containerd[1429]: 2024-09-04 17:12:22.672 [INFO][4921] k8s.go 500: Wrote updated endpoint to datastore ContainerID="32b55b59fb92c42e9af8e6665e2754910cb643f37918b742dcc0e689c8f5d222" Namespace="calico-apiserver" Pod="calico-apiserver-7b97449f75-dcv27" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b97449f75--dcv27-eth0"
Sep  4 17:12:22.703724 containerd[1429]: time="2024-09-04T17:12:22.702929099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep  4 17:12:22.703724 containerd[1429]: time="2024-09-04T17:12:22.703011740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:12:22.703724 containerd[1429]: time="2024-09-04T17:12:22.703026901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep  4 17:12:22.703724 containerd[1429]: time="2024-09-04T17:12:22.703036861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep  4 17:12:22.725528 systemd[1]: Started cri-containerd-32b55b59fb92c42e9af8e6665e2754910cb643f37918b742dcc0e689c8f5d222.scope - libcontainer container 32b55b59fb92c42e9af8e6665e2754910cb643f37918b742dcc0e689c8f5d222.
Sep  4 17:12:22.743991 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep  4 17:12:22.763752 containerd[1429]: time="2024-09-04T17:12:22.763711069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b97449f75-dcv27,Uid:f6a00a74-46f1-44e4-bb76-18017f205de4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"32b55b59fb92c42e9af8e6665e2754910cb643f37918b742dcc0e689c8f5d222\""
Sep  4 17:12:22.767753 containerd[1429]: time="2024-09-04T17:12:22.767711538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\""
Sep  4 17:12:22.858088 systemd[1]: Started sshd@18-10.0.0.33:22-10.0.0.1:35850.service - OpenSSH per-connection server daemon (10.0.0.1:35850).
Sep  4 17:12:22.894810 sshd[4999]: Accepted publickey for core from 10.0.0.1 port 35850 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep  4 17:12:22.896389 sshd[4999]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:12:22.901412 systemd-logind[1410]: New session 19 of user core.
Sep  4 17:12:22.907584 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep  4 17:12:23.072337 sshd[4999]: pam_unix(sshd:session): session closed for user core
Sep  4 17:12:23.079542 systemd[1]: sshd@18-10.0.0.33:22-10.0.0.1:35850.service: Deactivated successfully.
Sep  4 17:12:23.083262 systemd[1]: session-19.scope: Deactivated successfully.
Sep  4 17:12:23.084719 systemd-logind[1410]: Session 19 logged out. Waiting for processes to exit.
Sep  4 17:12:23.085713 systemd-logind[1410]: Removed session 19.
Sep  4 17:12:24.281251 systemd-networkd[1373]: cali7f82cbd11c3: Gained IPv6LL
Sep  4 17:12:24.654411 containerd[1429]: time="2024-09-04T17:12:24.654357559Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:12:24.667497 containerd[1429]: time="2024-09-04T17:12:24.667447814Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=37849884"
Sep  4 17:12:24.680723 containerd[1429]: time="2024-09-04T17:12:24.680679511Z" level=info msg="ImageCreate event name:\"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:12:24.695388 containerd[1429]: time="2024-09-04T17:12:24.695336711Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Sep  4 17:12:24.696295 containerd[1429]: time="2024-09-04T17:12:24.696254086Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"39217419\" in 1.928490867s"
Sep  4 17:12:24.696295 containerd[1429]: time="2024-09-04T17:12:24.696293567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\""
Sep  4 17:12:24.699250 containerd[1429]: time="2024-09-04T17:12:24.699204535Z" level=info msg="CreateContainer within sandbox \"32b55b59fb92c42e9af8e6665e2754910cb643f37918b742dcc0e689c8f5d222\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep  4 17:12:24.805258 containerd[1429]: time="2024-09-04T17:12:24.805210513Z" level=info msg="CreateContainer within sandbox \"32b55b59fb92c42e9af8e6665e2754910cb643f37918b742dcc0e689c8f5d222\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2bd31d9a1454a9b27c9d953a6fa1aac006e201f78c7c229794cd785ba9b1c1c1\""
Sep  4 17:12:24.806027 containerd[1429]: time="2024-09-04T17:12:24.805988446Z" level=info msg="StartContainer for \"2bd31d9a1454a9b27c9d953a6fa1aac006e201f78c7c229794cd785ba9b1c1c1\""
Sep  4 17:12:24.850530 systemd[1]: Started cri-containerd-2bd31d9a1454a9b27c9d953a6fa1aac006e201f78c7c229794cd785ba9b1c1c1.scope - libcontainer container 2bd31d9a1454a9b27c9d953a6fa1aac006e201f78c7c229794cd785ba9b1c1c1.
Sep  4 17:12:24.947715 containerd[1429]: time="2024-09-04T17:12:24.947360965Z" level=info msg="StartContainer for \"2bd31d9a1454a9b27c9d953a6fa1aac006e201f78c7c229794cd785ba9b1c1c1\" returns successfully"
Sep  4 17:12:25.654501 kubelet[2482]: I0904 17:12:25.654423    2482 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b97449f75-dcv27" podStartSLOduration=2.7249809689999998 podCreationTimestamp="2024-09-04 17:12:21 +0000 UTC" firstStartedPulling="2024-09-04 17:12:22.76724313 +0000 UTC m=+66.511501479" lastFinishedPulling="2024-09-04 17:12:24.696637332 +0000 UTC m=+68.440895721" observedRunningTime="2024-09-04 17:12:25.653515917 +0000 UTC m=+69.397774306" watchObservedRunningTime="2024-09-04 17:12:25.654375211 +0000 UTC m=+69.398633640"
Sep  4 17:12:28.097834 systemd[1]: Started sshd@19-10.0.0.33:22-10.0.0.1:35860.service - OpenSSH per-connection server daemon (10.0.0.1:35860).
Sep  4 17:12:28.151361 sshd[5074]: Accepted publickey for core from 10.0.0.1 port 35860 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep  4 17:12:28.153783 sshd[5074]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep  4 17:12:28.161282 systemd-logind[1410]: New session 20 of user core.
Sep  4 17:12:28.169611 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep  4 17:12:28.412130 sshd[5074]: pam_unix(sshd:session): session closed for user core
Sep  4 17:12:28.417271 systemd[1]: sshd@19-10.0.0.33:22-10.0.0.1:35860.service: Deactivated successfully.
Sep  4 17:12:28.420262 systemd[1]: session-20.scope: Deactivated successfully.
Sep  4 17:12:28.421123 systemd-logind[1410]: Session 20 logged out. Waiting for processes to exit.
Sep  4 17:12:28.422376 systemd-logind[1410]: Removed session 20.