Jul 2 00:48:10.827996 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 2 00:48:10.828014 kernel: Linux version 6.1.96-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT Mon Jul 1 23:26:07 -00 2024 Jul 2 00:48:10.828022 kernel: efi: EFI v2.70 by EDK II Jul 2 00:48:10.828028 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9210018 MEMRESERVE=0xd9523d18 Jul 2 00:48:10.828032 kernel: random: crng init done Jul 2 00:48:10.828037 kernel: ACPI: Early table checksum verification disabled Jul 2 00:48:10.828044 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Jul 2 00:48:10.828050 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Jul 2 00:48:10.828056 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:48:10.828061 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:48:10.828066 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:48:10.828071 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:48:10.828076 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:48:10.828082 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:48:10.828090 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:48:10.828095 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:48:10.828101 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:48:10.828107 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jul 2 00:48:10.828112 kernel: NUMA: Failed to initialise from firmware Jul 2 00:48:10.828118 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jul 2 00:48:10.828124 kernel: NUMA: NODE_DATA [mem 0xdcb07800-0xdcb0cfff] Jul 2 00:48:10.828129 kernel: Zone ranges: Jul 2 00:48:10.828135 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jul 2 00:48:10.828142 kernel: DMA32 empty Jul 2 00:48:10.828147 kernel: Normal empty Jul 2 00:48:10.828153 kernel: Movable zone start for each node Jul 2 00:48:10.828158 kernel: Early memory node ranges Jul 2 00:48:10.828164 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Jul 2 00:48:10.828170 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Jul 2 00:48:10.828175 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Jul 2 00:48:10.828181 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Jul 2 00:48:10.828187 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Jul 2 00:48:10.828192 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Jul 2 00:48:10.828198 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Jul 2 00:48:10.828204 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jul 2 00:48:10.828210 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jul 2 00:48:10.828216 kernel: psci: probing for conduit method from ACPI. Jul 2 00:48:10.828222 kernel: psci: PSCIv1.1 detected in firmware. 
Jul 2 00:48:10.828227 kernel: psci: Using standard PSCI v0.2 function IDs Jul 2 00:48:10.828233 kernel: psci: Trusted OS migration not required Jul 2 00:48:10.828257 kernel: psci: SMC Calling Convention v1.1 Jul 2 00:48:10.828263 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jul 2 00:48:10.828271 kernel: percpu: Embedded 30 pages/cpu s83880 r8192 d30808 u122880 Jul 2 00:48:10.828277 kernel: pcpu-alloc: s83880 r8192 d30808 u122880 alloc=30*4096 Jul 2 00:48:10.828283 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jul 2 00:48:10.828289 kernel: Detected PIPT I-cache on CPU0 Jul 2 00:48:10.828295 kernel: CPU features: detected: GIC system register CPU interface Jul 2 00:48:10.828301 kernel: CPU features: detected: Hardware dirty bit management Jul 2 00:48:10.828306 kernel: CPU features: detected: Spectre-v4 Jul 2 00:48:10.828313 kernel: CPU features: detected: Spectre-BHB Jul 2 00:48:10.828335 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 2 00:48:10.828342 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 2 00:48:10.828348 kernel: CPU features: detected: ARM erratum 1418040 Jul 2 00:48:10.828354 kernel: alternatives: applying boot alternatives Jul 2 00:48:10.828361 kernel: Fallback order for Node 0: 0 Jul 2 00:48:10.828376 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jul 2 00:48:10.828382 kernel: Policy zone: DMA Jul 2 00:48:10.828389 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=737253d2193edddafc0d4fc7ff920b3d1af9a8074ff8ebc3150e355b81fc53aa Jul 2 00:48:10.828396 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 2 00:48:10.828402 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 2 00:48:10.828408 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 00:48:10.828414 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 00:48:10.828422 kernel: Memory: 2458544K/2572288K available (9984K kernel code, 2108K rwdata, 7720K rodata, 34688K init, 894K bss, 113744K reserved, 0K cma-reserved) Jul 2 00:48:10.828428 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 2 00:48:10.828434 kernel: trace event string verifier disabled Jul 2 00:48:10.828440 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 2 00:48:10.828447 kernel: rcu: RCU event tracing is enabled. Jul 2 00:48:10.828453 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 2 00:48:10.828459 kernel: Trampoline variant of Tasks RCU enabled. Jul 2 00:48:10.828465 kernel: Tracing variant of Tasks RCU enabled. Jul 2 00:48:10.828471 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 2 00:48:10.828477 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 2 00:48:10.828483 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 2 00:48:10.828489 kernel: GICv3: 256 SPIs implemented Jul 2 00:48:10.828497 kernel: GICv3: 0 Extended SPIs implemented Jul 2 00:48:10.828502 kernel: Root IRQ handler: gic_handle_irq Jul 2 00:48:10.828508 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jul 2 00:48:10.828514 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jul 2 00:48:10.828520 kernel: ITS [mem 0x08080000-0x0809ffff] Jul 2 00:48:10.828526 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Jul 2 00:48:10.828532 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Jul 2 00:48:10.828538 kernel: GICv3: using LPI property table @0x00000000400e0000 Jul 2 00:48:10.828544 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400f0000 Jul 2 00:48:10.828550 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 2 00:48:10.828556 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 2 00:48:10.828563 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 2 00:48:10.828569 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 2 00:48:10.828575 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 2 00:48:10.828581 kernel: arm-pv: using stolen time PV Jul 2 00:48:10.828588 kernel: Console: colour dummy device 80x25 Jul 2 00:48:10.828594 kernel: ACPI: Core revision 20220331 Jul 2 00:48:10.828600 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 2 00:48:10.828606 kernel: pid_max: default: 32768 minimum: 301 Jul 2 00:48:10.828612 kernel: LSM: Security Framework initializing Jul 2 00:48:10.828618 kernel: SELinux: Initializing. Jul 2 00:48:10.828626 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 00:48:10.828632 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 00:48:10.828638 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jul 2 00:48:10.828644 kernel: cblist_init_generic: Setting shift to 2 and lim to 1. Jul 2 00:48:10.828650 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jul 2 00:48:10.828656 kernel: cblist_init_generic: Setting shift to 2 and lim to 1. Jul 2 00:48:10.828662 kernel: rcu: Hierarchical SRCU implementation. Jul 2 00:48:10.828668 kernel: rcu: Max phase no-delay instances is 400. Jul 2 00:48:10.828675 kernel: Platform MSI: ITS@0x8080000 domain created Jul 2 00:48:10.828682 kernel: PCI/MSI: ITS@0x8080000 domain created Jul 2 00:48:10.828688 kernel: Remapping and enabling EFI services. Jul 2 00:48:10.828694 kernel: smp: Bringing up secondary CPUs ... 
Jul 2 00:48:10.828700 kernel: Detected PIPT I-cache on CPU1 Jul 2 00:48:10.828706 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jul 2 00:48:10.828712 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040100000 Jul 2 00:48:10.828718 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 2 00:48:10.828725 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 2 00:48:10.828731 kernel: Detected PIPT I-cache on CPU2 Jul 2 00:48:10.828737 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jul 2 00:48:10.828744 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040110000 Jul 2 00:48:10.828751 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 2 00:48:10.828757 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jul 2 00:48:10.828763 kernel: Detected PIPT I-cache on CPU3 Jul 2 00:48:10.828773 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jul 2 00:48:10.828781 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040120000 Jul 2 00:48:10.828788 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 2 00:48:10.828794 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jul 2 00:48:10.828800 kernel: smp: Brought up 1 node, 4 CPUs Jul 2 00:48:10.828807 kernel: SMP: Total of 4 processors activated. Jul 2 00:48:10.828813 kernel: CPU features: detected: 32-bit EL0 Support Jul 2 00:48:10.828821 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 2 00:48:10.828828 kernel: CPU features: detected: Common not Private translations Jul 2 00:48:10.828834 kernel: CPU features: detected: CRC32 instructions Jul 2 00:48:10.828841 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 2 00:48:10.828847 kernel: CPU features: detected: LSE atomic instructions Jul 2 00:48:10.828853 kernel: CPU features: detected: Privileged Access Never Jul 2 00:48:10.828861 kernel: CPU features: detected: RAS Extension Support Jul 2 00:48:10.828868 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 2 00:48:10.828874 kernel: CPU: All CPU(s) started at EL1 Jul 2 00:48:10.828881 kernel: alternatives: applying system-wide alternatives Jul 2 00:48:10.828887 kernel: devtmpfs: initialized Jul 2 00:48:10.828894 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 00:48:10.828900 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 2 00:48:10.828907 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 00:48:10.828913 kernel: SMBIOS 3.0.0 present. 
Jul 2 00:48:10.828921 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Jul 2 00:48:10.828927 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 00:48:10.828934 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 2 00:48:10.828945 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 2 00:48:10.828952 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 2 00:48:10.828958 kernel: audit: initializing netlink subsys (disabled) Jul 2 00:48:10.828965 kernel: audit: type=2000 audit(0.030:1): state=initialized audit_enabled=0 res=1 Jul 2 00:48:10.828971 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 00:48:10.828978 kernel: cpuidle: using governor menu Jul 2 00:48:10.828986 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 2 00:48:10.828992 kernel: ASID allocator initialised with 32768 entries Jul 2 00:48:10.828998 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 00:48:10.829005 kernel: Serial: AMBA PL011 UART driver Jul 2 00:48:10.829011 kernel: KASLR enabled Jul 2 00:48:10.829017 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 2 00:48:10.829024 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 2 00:48:10.829030 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 2 00:48:10.829036 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 2 00:48:10.829044 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 00:48:10.829051 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 2 00:48:10.829057 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 2 00:48:10.829063 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 2 00:48:10.829070 kernel: ACPI: Added _OSI(Module Device) Jul 2 00:48:10.829076 kernel: ACPI: Added _OSI(Processor Device) Jul 2 00:48:10.829082 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 00:48:10.829089 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 00:48:10.829095 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 2 00:48:10.829102 kernel: ACPI: Interpreter enabled Jul 2 00:48:10.829109 kernel: ACPI: Using GIC for interrupt routing Jul 2 00:48:10.829115 kernel: ACPI: MCFG table detected, 1 entries Jul 2 00:48:10.829122 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jul 2 00:48:10.829128 kernel: printk: console [ttyAMA0] enabled Jul 2 00:48:10.829135 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 2 00:48:10.829293 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 2 00:48:10.829358 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 2 00:48:10.829419 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 2 00:48:10.829477 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jul 2 00:48:10.829532 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jul 2 00:48:10.829541 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jul 2 00:48:10.829547 kernel: PCI host bridge to bus 0000:00 Jul 2 00:48:10.829609 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jul 2 00:48:10.829684 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 2 
00:48:10.829740 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jul 2 00:48:10.829790 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 2 00:48:10.829863 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jul 2 00:48:10.829933 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jul 2 00:48:10.830002 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jul 2 00:48:10.830061 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jul 2 00:48:10.830119 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jul 2 00:48:10.830179 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jul 2 00:48:10.830247 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jul 2 00:48:10.830310 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jul 2 00:48:10.830366 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jul 2 00:48:10.830418 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 2 00:48:10.830472 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jul 2 00:48:10.830480 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 2 00:48:10.830489 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 2 00:48:10.830496 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 2 00:48:10.830503 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 2 00:48:10.830509 kernel: iommu: Default domain type: Translated Jul 2 00:48:10.830516 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 2 00:48:10.830522 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 2 00:48:10.830529 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 2 00:48:10.830536 kernel: PTP clock support registered Jul 2 00:48:10.830542 kernel: Registered efivars operations Jul 2 00:48:10.830550 kernel: vgaarb: loaded Jul 2 00:48:10.830557 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 2 00:48:10.830563 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 00:48:10.830570 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 00:48:10.830577 kernel: pnp: PnP ACPI init Jul 2 00:48:10.830641 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jul 2 00:48:10.830651 kernel: pnp: PnP ACPI: found 1 devices Jul 2 00:48:10.830657 kernel: NET: Registered PF_INET protocol family Jul 2 00:48:10.830666 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 2 00:48:10.830672 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 2 00:48:10.830679 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 00:48:10.830686 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 2 00:48:10.830692 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 2 00:48:10.830699 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 2 00:48:10.830706 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 00:48:10.830713 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 00:48:10.830719 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 00:48:10.830727 kernel: PCI: CLS 0 bytes, default 64 Jul 2 00:48:10.830734 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jul 2 
00:48:10.830740 kernel: kvm [1]: HYP mode not available Jul 2 00:48:10.830747 kernel: Initialise system trusted keyrings Jul 2 00:48:10.830753 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 2 00:48:10.830760 kernel: Key type asymmetric registered Jul 2 00:48:10.830767 kernel: Asymmetric key parser 'x509' registered Jul 2 00:48:10.830773 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed Jul 2 00:48:10.830780 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 2 00:48:10.830788 kernel: io scheduler mq-deadline registered Jul 2 00:48:10.830794 kernel: io scheduler kyber registered Jul 2 00:48:10.830801 kernel: io scheduler bfq registered Jul 2 00:48:10.830807 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 2 00:48:10.830814 kernel: ACPI: button: Power Button [PWRB] Jul 2 00:48:10.830821 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 2 00:48:10.830881 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jul 2 00:48:10.830890 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 00:48:10.830896 kernel: thunder_xcv, ver 1.0 Jul 2 00:48:10.830905 kernel: thunder_bgx, ver 1.0 Jul 2 00:48:10.830911 kernel: nicpf, ver 1.0 Jul 2 00:48:10.830918 kernel: nicvf, ver 1.0 Jul 2 00:48:10.830991 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 2 00:48:10.831049 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T00:48:10 UTC (1719881290) Jul 2 00:48:10.831058 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 2 00:48:10.831064 kernel: NET: Registered PF_INET6 protocol family Jul 2 00:48:10.831071 kernel: Segment Routing with IPv6 Jul 2 00:48:10.831080 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 00:48:10.831087 kernel: NET: Registered PF_PACKET protocol family Jul 2 00:48:10.831094 kernel: Key type dns_resolver registered Jul 2 00:48:10.831100 kernel: registered taskstats version 1 Jul 2 00:48:10.831107 kernel: Loading compiled-in X.509 certificates Jul 2 00:48:10.831114 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.96-flatcar: 8c08f3036bf5312c828f80bd26cdb320494d46fc' Jul 2 00:48:10.831120 kernel: Key type .fscrypt registered Jul 2 00:48:10.831127 kernel: Key type fscrypt-provisioning registered Jul 2 00:48:10.831133 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 2 00:48:10.831141 kernel: ima: Allocated hash algorithm: sha1 Jul 2 00:48:10.831148 kernel: ima: No architecture policies found Jul 2 00:48:10.831154 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 2 00:48:10.831161 kernel: clk: Disabling unused clocks Jul 2 00:48:10.831168 kernel: Freeing unused kernel memory: 34688K Jul 2 00:48:10.831174 kernel: Run /init as init process Jul 2 00:48:10.831181 kernel: with arguments: Jul 2 00:48:10.831187 kernel: /init Jul 2 00:48:10.831193 kernel: with environment: Jul 2 00:48:10.831201 kernel: HOME=/ Jul 2 00:48:10.831207 kernel: TERM=linux Jul 2 00:48:10.831214 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 00:48:10.831222 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 00:48:10.831231 systemd[1]: Detected virtualization kvm. Jul 2 00:48:10.831238 systemd[1]: Detected architecture arm64. 
Jul 2 00:48:10.831253 systemd[1]: Running in initrd. Jul 2 00:48:10.831260 systemd[1]: No hostname configured, using default hostname. Jul 2 00:48:10.831269 systemd[1]: Hostname set to <localhost>. Jul 2 00:48:10.831276 systemd[1]: Initializing machine ID from VM UUID. Jul 2 00:48:10.831283 systemd[1]: Queued start job for default target initrd.target. Jul 2 00:48:10.831291 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 00:48:10.831298 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 00:48:10.831305 systemd[1]: Reached target paths.target - Path Units. Jul 2 00:48:10.831312 systemd[1]: Reached target slices.target - Slice Units. Jul 2 00:48:10.831319 systemd[1]: Reached target swap.target - Swaps. Jul 2 00:48:10.831327 systemd[1]: Reached target timers.target - Timer Units. Jul 2 00:48:10.831335 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 00:48:10.831342 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 00:48:10.831349 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jul 2 00:48:10.831356 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 2 00:48:10.831363 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 2 00:48:10.831370 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 00:48:10.831379 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 00:48:10.831386 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 00:48:10.831393 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 00:48:10.831400 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 00:48:10.831407 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 2 00:48:10.831414 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 00:48:10.831421 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 00:48:10.831428 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 00:48:10.831435 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console... Jul 2 00:48:10.831444 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 00:48:10.831451 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 00:48:10.831458 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 00:48:10.831465 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console. Jul 2 00:48:10.831472 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 00:48:10.831480 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 00:48:10.831487 kernel: audit: type=1130 audit(1719881290.828:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:10.831497 systemd-journald[224]: Journal started Jul 2 00:48:10.831536 systemd-journald[224]: Runtime Journal (/run/log/journal/92b47a689c40483281c81e91b180264f) is 6.0M, max 48.6M, 42.6M free.
Jul 2 00:48:10.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:10.822524 systemd-modules-load[226]: Inserted module 'overlay' Jul 2 00:48:10.833533 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 00:48:10.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:10.836286 kernel: audit: type=1130 audit(1719881290.833:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:10.840419 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 00:48:10.843655 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 00:48:10.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:10.844851 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:48:10.855424 kernel: audit: type=1130 audit(1719881290.845:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:10.855448 kernel: Bridge firewalling registered Jul 2 00:48:10.846674 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 2 00:48:10.847679 systemd-modules-load[226]: Inserted module 'br_netfilter' Jul 2 00:48:10.857317 dracut-cmdline[246]: dracut-dracut-053 Jul 2 00:48:10.871277 kernel: audit: type=1130 audit(1719881290.858:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:10.871303 kernel: audit: type=1334 audit(1719881290.861:6): prog-id=6 op=LOAD Jul 2 00:48:10.871312 kernel: SCSI subsystem initialized Jul 2 00:48:10.871320 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 00:48:10.871335 kernel: device-mapper: uevent: version 1.0.3 Jul 2 00:48:10.871344 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com Jul 2 00:48:10.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:10.861000 audit: BPF prog-id=6 op=LOAD Jul 2 00:48:10.871416 dracut-cmdline[246]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=737253d2193edddafc0d4fc7ff920b3d1af9a8074ff8ebc3150e355b81fc53aa Jul 2 00:48:10.857620 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. 
Jul 2 00:48:10.869513 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 00:48:10.877614 systemd-modules-load[226]: Inserted module 'dm_multipath' Jul 2 00:48:10.879148 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 00:48:10.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:10.880878 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:48:10.884322 kernel: audit: type=1130 audit(1719881290.879:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:10.890342 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:48:10.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:10.896279 kernel: audit: type=1130 audit(1719881290.890:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:10.901068 systemd-resolved[256]: Positive Trust Anchors: Jul 2 00:48:10.901084 systemd-resolved[256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 00:48:10.901112 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 00:48:10.905701 systemd-resolved[256]: Defaulting to hostname 'linux'. Jul 2 00:48:10.910282 kernel: audit: type=1130 audit(1719881290.907:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:10.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:10.906568 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 00:48:10.907772 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:48:10.942269 kernel: Loading iSCSI transport class v2.0-870. Jul 2 00:48:10.950264 kernel: iscsi: registered transport (tcp) Jul 2 00:48:10.963415 kernel: iscsi: registered transport (qla4xxx) Jul 2 00:48:10.963436 kernel: QLogic iSCSI HBA Driver Jul 2 00:48:11.006793 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 2 00:48:11.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:48:11.010270 kernel: audit: type=1130 audit(1719881291.007:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:11.015440 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 2 00:48:11.067280 kernel: raid6: neonx8 gen() 15730 MB/s Jul 2 00:48:11.084272 kernel: raid6: neonx4 gen() 15634 MB/s Jul 2 00:48:11.101256 kernel: raid6: neonx2 gen() 13196 MB/s Jul 2 00:48:11.118257 kernel: raid6: neonx1 gen() 10460 MB/s Jul 2 00:48:11.135264 kernel: raid6: int64x8 gen() 5219 MB/s Jul 2 00:48:11.152269 kernel: raid6: int64x4 gen() 7311 MB/s Jul 2 00:48:11.169269 kernel: raid6: int64x2 gen() 6125 MB/s Jul 2 00:48:11.186262 kernel: raid6: int64x1 gen() 4970 MB/s Jul 2 00:48:11.186284 kernel: raid6: using algorithm neonx8 gen() 15730 MB/s Jul 2 00:48:11.203267 kernel: raid6: .... xor() 11835 MB/s, rmw enabled Jul 2 00:48:11.203280 kernel: raid6: using neon recovery algorithm Jul 2 00:48:11.208257 kernel: xor: measuring software checksum speed Jul 2 00:48:11.209258 kernel: 8regs : 19869 MB/sec Jul 2 00:48:11.209271 kernel: 32regs : 19626 MB/sec Jul 2 00:48:11.210349 kernel: arm64_neon : 27098 MB/sec Jul 2 00:48:11.210359 kernel: xor: using function: arm64_neon (27098 MB/sec) Jul 2 00:48:11.266267 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Jul 2 00:48:11.277554 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 2 00:48:11.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:11.278000 audit: BPF prog-id=7 op=LOAD Jul 2 00:48:11.278000 audit: BPF prog-id=8 op=LOAD Jul 2 00:48:11.286457 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 00:48:11.298366 systemd-udevd[429]: Using default interface naming scheme 'v252'. Jul 2 00:48:11.301602 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 00:48:11.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:11.303308 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 2 00:48:11.319623 dracut-pre-trigger[436]: rd.md=0: removing MD RAID activation Jul 2 00:48:11.346385 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 00:48:11.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:11.353454 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 00:48:11.388435 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 00:48:11.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:48:11.413257 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jul 2 00:48:11.422649 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 2 00:48:11.422743 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 00:48:11.422753 kernel: GPT:9289727 != 19775487 Jul 2 00:48:11.422761 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 00:48:11.422769 kernel: GPT:9289727 != 19775487 Jul 2 00:48:11.422777 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 00:48:11.422785 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 00:48:11.438285 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (481) Jul 2 00:48:11.441272 kernel: BTRFS: device fsid 7966a877-b206-421f-a16d-85fea061a717 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (476) Jul 2 00:48:11.441691 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 2 00:48:11.447191 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 2 00:48:11.450930 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 2 00:48:11.453731 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 2 00:48:11.454551 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 2 00:48:11.468444 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 2 00:48:11.474278 disk-uuid[499]: Primary Header is updated. Jul 2 00:48:11.474278 disk-uuid[499]: Secondary Entries is updated. Jul 2 00:48:11.474278 disk-uuid[499]: Secondary Header is updated. Jul 2 00:48:11.476765 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 00:48:12.490290 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 00:48:12.490973 disk-uuid[500]: The operation has completed successfully. Jul 2 00:48:12.518796 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 00:48:12.518895 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 2 00:48:12.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:12.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:12.527470 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 2 00:48:12.530506 sh[513]: Success Jul 2 00:48:12.554269 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 2 00:48:12.598598 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 2 00:48:12.600066 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 2 00:48:12.600815 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 2 00:48:12.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:48:12.618817 kernel: BTRFS info (device dm-0): first mount of filesystem 7966a877-b206-421f-a16d-85fea061a717 Jul 2 00:48:12.618859 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 2 00:48:12.618869 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 2 00:48:12.619622 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 2 00:48:12.620664 kernel: BTRFS info (device dm-0): using free space tree Jul 2 00:48:12.630155 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 2 00:48:12.630993 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 2 00:48:12.642429 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 2 00:48:12.643828 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 2 00:48:12.651387 kernel: BTRFS info (device vda6): first mount of filesystem 01dd2a72-8ea1-4734-94a7-a6e6a6dcdb27 Jul 2 00:48:12.651424 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 2 00:48:12.651433 kernel: BTRFS info (device vda6): using free space tree Jul 2 00:48:12.661200 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 00:48:12.662407 kernel: BTRFS info (device vda6): last unmount of filesystem 01dd2a72-8ea1-4734-94a7-a6e6a6dcdb27 Jul 2 00:48:12.692317 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 2 00:48:12.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:12.697470 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 2 00:48:12.730615 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 00:48:12.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:12.732000 audit: BPF prog-id=9 op=LOAD Jul 2 00:48:12.739470 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 00:48:12.769330 systemd-networkd[696]: lo: Link UP Jul 2 00:48:12.769340 systemd-networkd[696]: lo: Gained carrier Jul 2 00:48:12.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:12.769906 systemd-networkd[696]: Enumeration completed Jul 2 00:48:12.770019 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 00:48:12.770323 systemd-networkd[696]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:48:12.770326 systemd-networkd[696]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 00:48:12.770921 systemd[1]: Reached target network.target - Network. Jul 2 00:48:12.772968 systemd-networkd[696]: eth0: Link UP Jul 2 00:48:12.772972 systemd-networkd[696]: eth0: Gained carrier Jul 2 00:48:12.772978 systemd-networkd[696]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 2 00:48:12.781416 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver... Jul 2 00:48:12.792510 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver. Jul 2 00:48:12.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:12.792890 ignition[656]: Ignition 2.15.0 Jul 2 00:48:12.792897 ignition[656]: Stage: fetch-offline Jul 2 00:48:12.794694 systemd[1]: Starting iscsid.service - Open-iSCSI... Jul 2 00:48:12.792928 ignition[656]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:48:12.797647 iscsid[711]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 00:48:12.797647 iscsid[711]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Jul 2 00:48:12.797647 iscsid[711]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 2 00:48:12.797647 iscsid[711]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 00:48:12.797647 iscsid[711]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 2 00:48:12.797647 iscsid[711]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 00:48:12.797647 iscsid[711]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 00:48:12.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:12.795320 systemd-networkd[696]: eth0: DHCPv4 address 10.0.0.149/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 2 00:48:12.792943 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 00:48:12.800133 systemd[1]: Started iscsid.service - Open-iSCSI. Jul 2 00:48:12.793030 ignition[656]: parsed url from cmdline: "" Jul 2 00:48:12.802912 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 2 00:48:12.793033 ignition[656]: no config URL provided Jul 2 00:48:12.793038 ignition[656]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 00:48:12.793044 ignition[656]: no config at "/usr/lib/ignition/user.ign" Jul 2 00:48:12.793068 ignition[656]: op(1): [started] loading QEMU firmware config module Jul 2 00:48:12.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:12.813803 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 2 00:48:12.793072 ignition[656]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 2 00:48:12.815295 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 00:48:12.816699 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 00:48:12.819133 ignition[656]: op(1): [finished] loading QEMU firmware config module Jul 2 00:48:12.818216 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 00:48:12.819157 ignition[656]: QEMU firmware config was not found. Ignoring...
Jul 2 00:48:12.830452 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 2 00:48:12.838494 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 2 00:48:12.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:12.862495 ignition[656]: parsing config with SHA512: 26b41dcfbed4a4d6f315093414dbe6ae37ae994772b5cb9c840227f5ce60c191f8e602cc45a16e4ea064febc354e89c63c2278848aec5eda6b2d15833222488e Jul 2 00:48:12.867034 unknown[656]: fetched base config from "system" Jul 2 00:48:12.867044 unknown[656]: fetched user config from "qemu" Jul 2 00:48:12.867684 ignition[656]: fetch-offline: fetch-offline passed Jul 2 00:48:12.867746 ignition[656]: Ignition finished successfully Jul 2 00:48:12.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:12.868818 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 00:48:12.869977 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 2 00:48:12.879505 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 2 00:48:12.891058 ignition[725]: Ignition 2.15.0 Jul 2 00:48:12.892121 ignition[725]: Stage: kargs Jul 2 00:48:12.892289 ignition[725]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:48:12.892299 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 00:48:12.894894 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 2 00:48:12.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:12.893331 ignition[725]: kargs: kargs passed Jul 2 00:48:12.893380 ignition[725]: Ignition finished successfully Jul 2 00:48:12.902483 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 2 00:48:12.912027 ignition[733]: Ignition 2.15.0 Jul 2 00:48:12.912037 ignition[733]: Stage: disks Jul 2 00:48:12.912134 ignition[733]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:48:12.912143 ignition[733]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 00:48:12.914012 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 2 00:48:12.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:12.913104 ignition[733]: disks: disks passed Jul 2 00:48:12.915551 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 2 00:48:12.913149 ignition[733]: Ignition finished successfully Jul 2 00:48:12.916956 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 00:48:12.918143 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 00:48:12.919617 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 00:48:12.920869 systemd[1]: Reached target basic.target - Basic System. Jul 2 00:48:12.933450 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jul 2 00:48:12.943968 systemd-fsck[743]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 2 00:48:12.947185 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 2 00:48:12.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:12.948916 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 2 00:48:12.997993 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Quota mode: none. Jul 2 00:48:12.998795 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 2 00:48:12.999609 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 2 00:48:13.011388 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 00:48:13.013817 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 2 00:48:13.014676 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 2 00:48:13.014715 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 00:48:13.014741 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 00:48:13.022370 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (749) Jul 2 00:48:13.016587 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 2 00:48:13.025228 kernel: BTRFS info (device vda6): first mount of filesystem 01dd2a72-8ea1-4734-94a7-a6e6a6dcdb27 Jul 2 00:48:13.025256 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 2 00:48:13.025266 kernel: BTRFS info (device vda6): using free space tree Jul 2 00:48:13.019220 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 2 00:48:13.028548 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 00:48:13.077362 initrd-setup-root[773]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 00:48:13.080520 initrd-setup-root[780]: cut: /sysroot/etc/group: No such file or directory Jul 2 00:48:13.084293 initrd-setup-root[787]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 00:48:13.087765 initrd-setup-root[794]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 00:48:13.159481 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 2 00:48:13.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:13.173475 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 2 00:48:13.175073 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 2 00:48:13.178727 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jul 2 00:48:13.180851 kernel: BTRFS info (device vda6): last unmount of filesystem 01dd2a72-8ea1-4734-94a7-a6e6a6dcdb27 Jul 2 00:48:13.193757 ignition[860]: INFO : Ignition 2.15.0 Jul 2 00:48:13.193757 ignition[860]: INFO : Stage: mount Jul 2 00:48:13.195445 ignition[860]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:48:13.195445 ignition[860]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 00:48:13.195445 ignition[860]: INFO : mount: mount passed Jul 2 00:48:13.195445 ignition[860]: INFO : Ignition finished successfully Jul 2 00:48:13.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:13.197915 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 2 00:48:13.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:13.199226 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 2 00:48:13.207493 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 2 00:48:14.008450 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 00:48:14.014266 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (873) Jul 2 00:48:14.018461 kernel: BTRFS info (device vda6): first mount of filesystem 01dd2a72-8ea1-4734-94a7-a6e6a6dcdb27 Jul 2 00:48:14.018478 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 2 00:48:14.018488 kernel: BTRFS info (device vda6): using free space tree Jul 2 00:48:14.021539 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 2 00:48:14.045554 ignition[891]: INFO : Ignition 2.15.0 Jul 2 00:48:14.045554 ignition[891]: INFO : Stage: files Jul 2 00:48:14.046822 ignition[891]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:48:14.046822 ignition[891]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 00:48:14.048482 ignition[891]: DEBUG : files: compiled without relabeling support, skipping Jul 2 00:48:14.048482 ignition[891]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 00:48:14.048482 ignition[891]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 00:48:14.051481 ignition[891]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 00:48:14.051481 ignition[891]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 00:48:14.051481 ignition[891]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 00:48:14.051481 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 2 00:48:14.051481 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 2 00:48:14.051481 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 2 00:48:14.050383 unknown[891]: wrote ssh authorized keys file for user: core Jul 2 00:48:14.059095 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 2 00:48:14.073011 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 2 00:48:14.097410 systemd-networkd[696]: eth0: Gained IPv6LL Jul 2 00:48:14.123454 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 2 00:48:14.125114 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 2 00:48:14.125114 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 00:48:14.125114 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 00:48:14.125114 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 00:48:14.125114 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 00:48:14.125114 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 00:48:14.125114 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 00:48:14.125114 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 00:48:14.125114 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 00:48:14.125114 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 
00:48:14.125114 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jul 2 00:48:14.125114 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jul 2 00:48:14.125114 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jul 2 00:48:14.125114 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1 Jul 2 00:48:14.427613 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 2 00:48:14.681945 ignition[891]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jul 2 00:48:14.681945 ignition[891]: INFO : files: op(c): [started] processing unit "containerd.service" Jul 2 00:48:14.684543 ignition[891]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 2 00:48:14.684543 ignition[891]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 2 00:48:14.684543 ignition[891]: INFO : files: op(c): [finished] processing unit "containerd.service" Jul 2 00:48:14.684543 ignition[891]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jul 2 00:48:14.684543 ignition[891]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 00:48:14.684543 ignition[891]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 00:48:14.684543 ignition[891]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jul 2 00:48:14.684543 ignition[891]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Jul 2 00:48:14.684543 ignition[891]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 2 00:48:14.684543 ignition[891]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 2 00:48:14.684543 ignition[891]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Jul 2 00:48:14.684543 ignition[891]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Jul 2 00:48:14.684543 ignition[891]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 2 00:48:14.700291 ignition[891]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 2 00:48:14.700291 ignition[891]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Jul 2 00:48:14.700291 ignition[891]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Jul 2 00:48:14.700291 ignition[891]: INFO : files: op(14): 
[finished] setting preset to enabled for "prepare-helm.service" Jul 2 00:48:14.700291 ignition[891]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 00:48:14.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.707579 ignition[891]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 00:48:14.707579 ignition[891]: INFO : files: files passed Jul 2 00:48:14.707579 ignition[891]: INFO : Ignition finished successfully Jul 2 00:48:14.702815 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 2 00:48:14.715468 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 2 00:48:14.716962 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 2 00:48:14.718888 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 00:48:14.718998 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 2 00:48:14.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.721177 initrd-setup-root-after-ignition[916]: grep: /sysroot/oem/oem-release: No such file or directory Jul 2 00:48:14.722914 initrd-setup-root-after-ignition[918]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:48:14.722914 initrd-setup-root-after-ignition[918]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:48:14.725311 initrd-setup-root-after-ignition[922]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:48:14.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.724488 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 00:48:14.726114 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 2 00:48:14.729575 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 2 00:48:14.742730 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 00:48:14.742836 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 2 00:48:14.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.744395 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 2 00:48:14.745889 systemd[1]: Reached target initrd.target - Initrd Default Target. 
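The ignition[891] entries above span the whole "files" stage, from "Stage: files" at 00:48:14.045554 to "Ignition finished successfully" at 00:48:14.707579, i.e. roughly 0.66 s including the two downloads. A minimal sketch of recovering that figure from the journal timestamps, assuming the log has been captured one entry per line to a hypothetical file named boot.log:

    # Sketch: time the Ignition "files" stage from journal-style timestamps such as
    # "Jul 2 00:48:14.045554 ignition[891]: INFO : Stage: files".
    # Assumes a hypothetical capture file "boot.log" with one journal entry per line;
    # the prefix carries no year, so one is pinned arbitrarily for the arithmetic.
    import re
    from datetime import datetime

    ENTRY = re.compile(r"^(\w{3} +\d+ \d{2}:\d{2}:\d{2}\.\d{6}) ignition\[\d+\]: (.*)")

    def parse_ts(stamp: str) -> datetime:
        return datetime.strptime("2024 " + stamp, "%Y %b %d %H:%M:%S.%f")

    start = end = None
    with open("boot.log") as log:
        for line in log:
            match = ENTRY.match(line)
            if not match:
                continue
            stamp, message = match.groups()
            if start is None and "Stage: files" in message:
                start = parse_ts(stamp)
            elif start is not None and "Ignition finished successfully" in message:
                end = parse_ts(stamp)
                break

    if start and end:
        print(f"files stage took {(end - start).total_seconds():.3f}s")
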
Jul 2 00:48:14.747332 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 2 00:48:14.748183 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 2 00:48:14.758718 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 00:48:14.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.760362 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 2 00:48:14.768153 systemd[1]: Stopped target network.target - Network. Jul 2 00:48:14.768958 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:48:14.770157 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 00:48:14.771636 systemd[1]: Stopped target timers.target - Timer Units. Jul 2 00:48:14.773015 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 00:48:14.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.773128 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 00:48:14.774415 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 2 00:48:14.775733 systemd[1]: Stopped target basic.target - Basic System. Jul 2 00:48:14.777067 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 2 00:48:14.778512 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 00:48:14.779799 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 2 00:48:14.781387 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 2 00:48:14.782886 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 00:48:14.784480 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 2 00:48:14.785808 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 2 00:48:14.787234 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems. Jul 2 00:48:14.788650 systemd[1]: Stopped target swap.target - Swaps. Jul 2 00:48:14.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.789793 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 00:48:14.789910 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 2 00:48:14.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.791398 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 2 00:48:14.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.792593 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 00:48:14.792693 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Jul 2 00:48:14.793993 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 00:48:14.794089 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 00:48:14.795524 systemd[1]: Stopped target paths.target - Path Units. Jul 2 00:48:14.796709 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 00:48:14.798294 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 00:48:14.799628 systemd[1]: Stopped target slices.target - Slice Units. Jul 2 00:48:14.800983 systemd[1]: Stopped target sockets.target - Socket Units. Jul 2 00:48:14.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.802181 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 00:48:14.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.802273 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 00:48:14.803566 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 00:48:14.803629 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 00:48:14.805256 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 00:48:14.805362 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 00:48:14.806574 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 00:48:14.806667 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 2 00:48:14.816573 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 2 00:48:14.818165 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 2 00:48:14.819143 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 2 00:48:14.821469 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 2 00:48:14.822194 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 00:48:14.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.822335 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 00:48:14.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.823825 systemd-networkd[696]: eth0: DHCPv6 lease lost Jul 2 00:48:14.824909 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 00:48:14.825015 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 00:48:14.829566 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 00:48:14.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:48:14.832093 ignition[936]: INFO : Ignition 2.15.0 Jul 2 00:48:14.832093 ignition[936]: INFO : Stage: umount Jul 2 00:48:14.832093 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:48:14.832093 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 00:48:14.830220 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 00:48:14.837137 ignition[936]: INFO : umount: umount passed Jul 2 00:48:14.837137 ignition[936]: INFO : Ignition finished successfully Jul 2 00:48:14.830348 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 2 00:48:14.832040 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 00:48:14.832109 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 2 00:48:14.841430 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 2 00:48:14.841000 audit: BPF prog-id=9 op=UNLOAD Jul 2 00:48:14.842226 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 00:48:14.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.842326 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 00:48:14.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.844901 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 00:48:14.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.844960 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:48:14.847443 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 00:48:14.847486 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 2 00:48:14.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.849170 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 00:48:14.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.849677 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 00:48:14.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.849775 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 2 00:48:14.851596 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 00:48:14.851679 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
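Each unit transition in this teardown is mirrored by an audit[1] record (SERVICE_START or SERVICE_STOP, with the unit name in the msg payload). A minimal sketch, under the same hypothetical one-entry-per-line boot.log assumption, of tallying those records per unit:

    # Sketch: count audit SERVICE_START / SERVICE_STOP records per systemd unit from
    # lines like:
    #   Jul 2 00:48:14.862000 audit[1]: SERVICE_STOP ... msg='unit=sysroot-boot comm="systemd" ...'
    # "boot.log" is a hypothetical capture of this journal, one entry per line.
    import re
    from collections import Counter

    AUDIT = re.compile(r"audit\[\d+\]: (SERVICE_START|SERVICE_STOP) .*?msg='unit=([^ ]+)")

    counts = Counter()
    with open("boot.log") as log:
        for line in log:
            match = AUDIT.search(line)
            if match:
                event, unit = match.groups()
                counts[unit, event] += 1

    for (unit, event), n in sorted(counts.items()):
        print(f"{unit:45} {event:13} {n}")
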
Jul 2 00:48:14.852956 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 00:48:14.853038 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 2 00:48:14.861808 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 00:48:14.861907 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 2 00:48:14.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.862000 audit: BPF prog-id=6 op=UNLOAD Jul 2 00:48:14.864320 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 00:48:14.864473 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 2 00:48:14.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.865988 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 00:48:14.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.866032 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 2 00:48:14.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.867177 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 00:48:14.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.867212 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 2 00:48:14.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.868501 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 00:48:14.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.868540 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 2 00:48:14.870001 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 00:48:14.870042 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 2 00:48:14.871421 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 2 00:48:14.871458 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 00:48:14.874893 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 00:48:14.879111 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 2 00:48:14.883165 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 00:48:14.883313 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jul 2 00:48:14.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.884926 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 00:48:14.884970 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 2 00:48:14.886484 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 00:48:14.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.886513 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 00:48:14.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.887911 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 00:48:14.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.887956 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 2 00:48:14.889409 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 00:48:14.889445 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 2 00:48:14.890828 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 00:48:14.890862 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:48:14.904482 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 2 00:48:14.905220 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 00:48:14.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.905303 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 00:48:14.907606 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 00:48:14.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.907652 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 00:48:14.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.909301 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:48:14.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:14.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:48:14.909340 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console. Jul 2 00:48:14.911098 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 00:48:14.911208 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 2 00:48:14.912613 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 2 00:48:14.914750 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 2 00:48:14.921202 systemd[1]: Switching root. Jul 2 00:48:14.922000 audit: BPF prog-id=8 op=UNLOAD Jul 2 00:48:14.922000 audit: BPF prog-id=7 op=UNLOAD Jul 2 00:48:14.922000 audit: BPF prog-id=5 op=UNLOAD Jul 2 00:48:14.922000 audit: BPF prog-id=4 op=UNLOAD Jul 2 00:48:14.922000 audit: BPF prog-id=3 op=UNLOAD Jul 2 00:48:14.935177 iscsid[711]: iscsid shutting down. Jul 2 00:48:14.935765 systemd-journald[224]: Received SIGTERM from PID 1 (n/a). Jul 2 00:48:14.935841 systemd-journald[224]: Journal stopped Jul 2 00:48:15.651294 kernel: SELinux: Permission cmd in class io_uring not defined in policy. Jul 2 00:48:15.651348 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 00:48:15.651359 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 00:48:15.651368 kernel: SELinux: policy capability open_perms=1 Jul 2 00:48:15.651378 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 00:48:15.651387 kernel: SELinux: policy capability always_check_network=0 Jul 2 00:48:15.651396 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 00:48:15.651405 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 00:48:15.651418 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 00:48:15.651427 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 00:48:15.651438 systemd[1]: Successfully loaded SELinux policy in 35.270ms. Jul 2 00:48:15.651460 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.366ms. Jul 2 00:48:15.651472 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 00:48:15.651484 systemd[1]: Detected virtualization kvm. Jul 2 00:48:15.651495 systemd[1]: Detected architecture arm64. Jul 2 00:48:15.651510 systemd[1]: Detected first boot. Jul 2 00:48:15.651521 systemd[1]: Initializing machine ID from VM UUID. Jul 2 00:48:15.651531 systemd[1]: Populated /etc with preset unit settings. Jul 2 00:48:15.651541 systemd[1]: Queued start job for default target multi-user.target. Jul 2 00:48:15.651551 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 2 00:48:15.651561 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 2 00:48:15.651571 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 2 00:48:15.651581 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 2 00:48:15.651591 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 2 00:48:15.651603 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 2 00:48:15.651613 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Jul 2 00:48:15.651624 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 2 00:48:15.651634 systemd[1]: Created slice user.slice - User and Session Slice. Jul 2 00:48:15.651645 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 00:48:15.651656 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 2 00:48:15.651669 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 2 00:48:15.651680 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 2 00:48:15.651695 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 2 00:48:15.651706 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 00:48:15.651717 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 00:48:15.651728 systemd[1]: Reached target slices.target - Slice Units. Jul 2 00:48:15.651738 systemd[1]: Reached target swap.target - Swaps. Jul 2 00:48:15.651748 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 2 00:48:15.651759 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 2 00:48:15.651769 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe. Jul 2 00:48:15.651779 kernel: kauditd_printk_skb: 72 callbacks suppressed Jul 2 00:48:15.651792 kernel: audit: type=1400 audit(1719881295.557:83): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 00:48:15.651802 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jul 2 00:48:15.651813 kernel: audit: type=1335 audit(1719881295.558:84): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jul 2 00:48:15.651823 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 2 00:48:15.651833 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 2 00:48:15.651843 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 00:48:15.651853 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 00:48:15.651864 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 00:48:15.651876 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 2 00:48:15.651886 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 2 00:48:15.651896 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 2 00:48:15.651906 systemd[1]: Mounting media.mount - External Media Directory... Jul 2 00:48:15.651916 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 2 00:48:15.651926 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 2 00:48:15.651944 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 2 00:48:15.651954 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 2 00:48:15.651964 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jul 2 00:48:15.651976 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 00:48:15.651986 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 2 00:48:15.651997 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 00:48:15.652007 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 00:48:15.652017 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 00:48:15.652098 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 2 00:48:15.652116 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 00:48:15.652127 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 00:48:15.652140 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 2 00:48:15.652150 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Jul 2 00:48:15.652160 kernel: fuse: init (API version 7.37) Jul 2 00:48:15.652170 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 00:48:15.652180 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 00:48:15.652190 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 2 00:48:15.652200 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 2 00:48:15.652211 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 00:48:15.652221 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 2 00:48:15.652233 kernel: ACPI: bus type drm_connector registered Jul 2 00:48:15.652357 kernel: audit: type=1305 audit(1719881295.649:85): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 00:48:15.652434 kernel: audit: type=1300 audit(1719881295.649:85): arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffe2de5e70 a2=4000 a3=1 items=0 ppid=1 pid=1066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:15.652452 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 2 00:48:15.652467 systemd-journald[1066]: Journal started Jul 2 00:48:15.652517 systemd-journald[1066]: Runtime Journal (/run/log/journal/92b47a689c40483281c81e91b180264f) is 6.0M, max 48.6M, 42.6M free. 
Jul 2 00:48:15.558000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jul 2 00:48:15.649000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 00:48:15.649000 audit[1066]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffe2de5e70 a2=4000 a3=1 items=0 ppid=1 pid=1066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:15.663716 kernel: audit: type=1327 audit(1719881295.649:85): proctitle="/usr/lib/systemd/systemd-journald" Jul 2 00:48:15.663784 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 00:48:15.663811 kernel: audit: type=1130 audit(1719881295.660:86): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:15.649000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 00:48:15.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:15.661024 systemd[1]: Mounted media.mount - External Media Directory. Jul 2 00:48:15.663683 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 2 00:48:15.664613 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 2 00:48:15.665731 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 2 00:48:15.667324 kernel: loop: module loaded Jul 2 00:48:15.667428 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 00:48:15.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:15.668441 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 00:48:15.668602 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 2 00:48:15.670726 kernel: audit: type=1130 audit(1719881295.667:87): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:15.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:15.671352 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:48:15.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:15.671517 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jul 2 00:48:15.673266 kernel: audit: type=1130 audit(1719881295.670:88): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:15.673295 kernel: audit: type=1131 audit(1719881295.670:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:15.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:15.676134 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 00:48:15.676298 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 00:48:15.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:15.679274 kernel: audit: type=1130 audit(1719881295.675:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:15.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:15.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:15.679513 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:48:15.679666 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 00:48:15.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:15.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:15.680985 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 00:48:15.681135 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 2 00:48:15.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:15.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:15.682162 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:48:15.682341 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jul 2 00:48:15.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:15.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:15.683520 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 00:48:15.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:15.684988 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 2 00:48:15.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:15.686124 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 2 00:48:15.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:15.687439 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 2 00:48:15.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:15.688665 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 2 00:48:15.703422 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 2 00:48:15.705522 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 2 00:48:15.706296 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 00:48:15.708987 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 2 00:48:15.711275 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 2 00:48:15.712097 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:48:15.713777 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed... Jul 2 00:48:15.714657 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 00:48:15.716004 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:48:15.719452 systemd-journald[1066]: Time spent on flushing to /var/log/journal/92b47a689c40483281c81e91b180264f is 12.442ms for 944 entries. Jul 2 00:48:15.719452 systemd-journald[1066]: System Journal (/var/log/journal/92b47a689c40483281c81e91b180264f) is 8.0M, max 195.6M, 187.6M free. Jul 2 00:48:15.735657 systemd-journald[1066]: Received client request to flush runtime journal. 
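The journald self-report above is enough to estimate the per-entry flush cost: 12.442 ms for 944 entries works out to roughly 13 µs per entry. As a trivial worked check:

    # Worked check of the flush figures systemd-journald reports above:
    # 12.442 ms spent flushing 944 entries to /var/log/journal.
    flush_ms = 12.442
    entries = 944
    print(f"~{flush_ms / entries * 1000:.1f} us per entry")  # prints ~13.2 us per entry
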
Jul 2 00:48:15.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:15.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:15.718142 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 2 00:48:15.722725 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 00:48:15.723706 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 2 00:48:15.724634 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 2 00:48:15.725803 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed. Jul 2 00:48:15.726792 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 2 00:48:15.735740 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 2 00:48:15.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:15.737223 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 2 00:48:15.739443 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:48:15.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:15.744138 udevadm[1127]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 2 00:48:15.745709 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 2 00:48:15.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:15.759596 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 00:48:15.776644 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 00:48:15.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.142581 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 2 00:48:16.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.154556 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 00:48:16.173388 systemd-udevd[1137]: Using default interface naming scheme 'v252'. Jul 2 00:48:16.184627 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jul 2 00:48:16.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.199675 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 00:48:16.215033 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Jul 2 00:48:16.222278 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1146) Jul 2 00:48:16.230269 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1157) Jul 2 00:48:16.232467 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 2 00:48:16.258898 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 2 00:48:16.285606 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 2 00:48:16.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.325792 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 2 00:48:16.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.328693 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 2 00:48:16.357825 lvm[1171]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 00:48:16.369400 systemd-networkd[1144]: lo: Link UP Jul 2 00:48:16.369414 systemd-networkd[1144]: lo: Gained carrier Jul 2 00:48:16.369748 systemd-networkd[1144]: Enumeration completed Jul 2 00:48:16.369870 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 00:48:16.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.371120 systemd-networkd[1144]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:48:16.371127 systemd-networkd[1144]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 00:48:16.372298 systemd-networkd[1144]: eth0: Link UP Jul 2 00:48:16.372303 systemd-networkd[1144]: eth0: Gained carrier Jul 2 00:48:16.372314 systemd-networkd[1144]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:48:16.386476 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 2 00:48:16.392211 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 2 00:48:16.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.393479 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Jul 2 00:48:16.396025 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 2 00:48:16.397399 systemd-networkd[1144]: eth0: DHCPv4 address 10.0.0.149/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 2 00:48:16.399924 lvm[1175]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 00:48:16.429304 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 2 00:48:16.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.430550 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 00:48:16.431687 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 00:48:16.431718 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 00:48:16.432734 systemd[1]: Reached target machines.target - Containers. Jul 2 00:48:16.445492 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 2 00:48:16.446726 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 00:48:16.446796 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:48:16.448217 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update... Jul 2 00:48:16.450673 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 2 00:48:16.453289 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 2 00:48:16.455800 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 2 00:48:16.457318 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1178 (bootctl) Jul 2 00:48:16.458685 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM... Jul 2 00:48:16.462321 kernel: loop0: detected capacity change from 0 to 59648 Jul 2 00:48:16.467657 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 2 00:48:16.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.483264 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 00:48:16.524660 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 2 00:48:16.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:48:16.536267 kernel: loop1: detected capacity change from 0 to 113264 Jul 2 00:48:16.537292 systemd-fsck[1186]: fsck.fat 4.2 (2021-01-31) Jul 2 00:48:16.537292 systemd-fsck[1186]: /dev/vda1: 242 files, 114659/258078 clusters Jul 2 00:48:16.540980 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM. Jul 2 00:48:16.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.575266 kernel: loop2: detected capacity change from 0 to 193208 Jul 2 00:48:16.604269 kernel: loop3: detected capacity change from 0 to 59648 Jul 2 00:48:16.611256 kernel: loop4: detected capacity change from 0 to 113264 Jul 2 00:48:16.618259 kernel: loop5: detected capacity change from 0 to 193208 Jul 2 00:48:16.623206 (sd-sysext)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 2 00:48:16.623601 (sd-sysext)[1195]: Merged extensions into '/usr'. Jul 2 00:48:16.625194 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 2 00:48:16.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.634497 systemd[1]: Starting ensure-sysext.service... Jul 2 00:48:16.636918 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 00:48:16.649917 systemd-tmpfiles[1198]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 00:48:16.651077 systemd-tmpfiles[1198]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 00:48:16.651394 systemd-tmpfiles[1198]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 2 00:48:16.652441 systemd-tmpfiles[1198]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 00:48:16.656839 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 00:48:16.657686 systemd[1]: Reloading. Jul 2 00:48:16.690711 ldconfig[1177]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 00:48:16.782228 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:48:16.824366 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 2 00:48:16.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.839131 systemd[1]: Mounting boot.mount - Boot partition... Jul 2 00:48:16.844973 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 00:48:16.846485 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 00:48:16.848489 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jul 2 00:48:16.850575 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 00:48:16.851455 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 00:48:16.851584 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:48:16.854006 systemd[1]: Mounted boot.mount - Boot partition. Jul 2 00:48:16.855077 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:48:16.855224 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 00:48:16.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.856557 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:48:16.856694 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 00:48:16.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.858294 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:48:16.858448 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 00:48:16.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.861422 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update. Jul 2 00:48:16.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.862761 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 00:48:16.876743 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 00:48:16.879049 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 00:48:16.881091 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 00:48:16.881915 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 2 00:48:16.882086 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:48:16.883039 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:48:16.883193 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 00:48:16.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.884613 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:48:16.884753 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 00:48:16.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.886431 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 00:48:16.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.887761 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:48:16.887937 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 00:48:16.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.905631 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 00:48:16.908568 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 2 00:48:16.909800 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 00:48:16.911667 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 00:48:16.914187 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 00:48:16.916588 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 00:48:16.918742 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 00:48:16.921418 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 2 00:48:16.921551 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:48:16.923069 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 2 00:48:16.926290 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 00:48:16.929255 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 2 00:48:16.932596 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 2 00:48:16.934832 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:48:16.935008 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 00:48:16.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.936545 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 00:48:16.936000 audit[1296]: SYSTEM_BOOT pid=1296 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.936748 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 00:48:16.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.938064 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:48:16.938218 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 00:48:16.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.939870 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:48:16.940055 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 00:48:16.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:48:16.943867 systemd[1]: Finished ensure-sysext.service. Jul 2 00:48:16.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.945741 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:48:16.945846 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 00:48:16.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.950452 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 2 00:48:16.951695 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 2 00:48:16.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.954586 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 2 00:48:16.962794 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 2 00:48:16.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.973218 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 2 00:48:16.974133 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 00:48:16.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:16.977841 augenrules[1311]: No rules Jul 2 00:48:16.977000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 00:48:16.977000 audit[1311]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff1edf160 a2=420 a3=0 items=0 ppid=1277 pid=1311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:16.977000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 00:48:16.978670 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 00:48:17.007107 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 2 00:48:17.007924 systemd-timesyncd[1295]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 2 00:48:17.007984 systemd-timesyncd[1295]: Initial clock synchronization to Tue 2024-07-02 00:48:17.207840 UTC. Jul 2 00:48:17.008430 systemd[1]: Reached target time-set.target - System Time Set. 
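systemd-timesyncd above reached the DHCP-supplied server at 10.0.0.1:123 and stepped the clock once. As a rough illustration of the SNTP exchange behind that log line (the server address and timeout are placeholders, and the real daemon also tracks delay, dispersion and polling intervals), a minimal client might look like:

    import socket
    import struct
    import time

    NTP_EPOCH_OFFSET = 2208988800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

    def sntp_time(server="10.0.0.1", port=123, timeout=2.0):
        # 48-byte SNTP request: LI=0, VN=4, Mode=3 (client).
        packet = b"\x23" + 47 * b"\x00"
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(timeout)
            s.sendto(packet, (server, port))
            data, _ = s.recvfrom(48)
        # Transmit Timestamp occupies bytes 40..47 of the reply.
        secs, frac = struct.unpack("!II", data[40:48])
        return secs - NTP_EPOCH_OFFSET + frac / 2**32

    if __name__ == "__main__":
        print(time.ctime(sntp_time()))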
Jul 2 00:48:17.010579 systemd-resolved[1294]: Positive Trust Anchors: Jul 2 00:48:17.010811 systemd-resolved[1294]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 00:48:17.010887 systemd-resolved[1294]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 00:48:17.016175 systemd-resolved[1294]: Defaulting to hostname 'linux'. Jul 2 00:48:17.019628 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 00:48:17.020492 systemd[1]: Reached target network.target - Network. Jul 2 00:48:17.021113 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:48:17.021968 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 00:48:17.022847 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 2 00:48:17.023745 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 2 00:48:17.024691 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 2 00:48:17.025591 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 2 00:48:17.026382 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 2 00:48:17.027132 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 00:48:17.027160 systemd[1]: Reached target paths.target - Path Units. Jul 2 00:48:17.027843 systemd[1]: Reached target timers.target - Timer Units. Jul 2 00:48:17.029257 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 2 00:48:17.031722 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 2 00:48:17.033402 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 2 00:48:17.034283 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:48:17.038228 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 2 00:48:17.039062 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 00:48:17.039788 systemd[1]: Reached target basic.target - Basic System. Jul 2 00:48:17.040700 systemd[1]: System is tainted: cgroupsv1 Jul 2 00:48:17.040752 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 2 00:48:17.040776 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 2 00:48:17.042230 systemd[1]: Starting containerd.service - containerd container runtime... Jul 2 00:48:17.044370 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 2 00:48:17.046503 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 2 00:48:17.048821 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
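The negative trust anchors listed above mark private and link-local namespaces that systemd-resolved exempts from DNSSEC validation. Deciding whether a query name falls under one of them is a plain label-suffix match; the sketch below uses a trimmed copy of the anchor list from the log:

    NEGATIVE_TRUST_ANCHORS = {
        "home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa",
        "d.f.ip6.arpa", "corp", "home", "internal", "intranet",
        "lan", "local", "private", "test",
    }

    def under_negative_anchor(name: str, anchors=NEGATIVE_TRUST_ANCHORS) -> bool:
        # A name is covered if it equals an anchor or ends in ".<anchor>".
        labels = name.rstrip(".").lower().split(".")
        return any(".".join(labels[i:]) in anchors for i in range(len(labels)))

    assert under_negative_anchor("printer.lan")
    assert under_negative_anchor("5.0.168.192.in-addr.arpa")
    assert not under_negative_anchor("example.org")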
Jul 2 00:48:17.049681 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 2 00:48:17.051672 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 2 00:48:17.054101 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 2 00:48:17.054858 jq[1324]: false Jul 2 00:48:17.057885 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 2 00:48:17.060705 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 2 00:48:17.063825 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 2 00:48:17.065171 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:48:17.065239 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 00:48:17.066748 systemd[1]: Starting update-engine.service - Update Engine... Jul 2 00:48:17.070083 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 2 00:48:17.072794 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 00:48:17.075746 jq[1339]: true Jul 2 00:48:17.073061 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 2 00:48:17.074366 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 00:48:17.074657 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 2 00:48:17.078366 extend-filesystems[1325]: Found loop3 Jul 2 00:48:17.078366 extend-filesystems[1325]: Found loop4 Jul 2 00:48:17.078366 extend-filesystems[1325]: Found loop5 Jul 2 00:48:17.078366 extend-filesystems[1325]: Found vda Jul 2 00:48:17.078366 extend-filesystems[1325]: Found vda1 Jul 2 00:48:17.078366 extend-filesystems[1325]: Found vda2 Jul 2 00:48:17.078366 extend-filesystems[1325]: Found vda3 Jul 2 00:48:17.078366 extend-filesystems[1325]: Found usr Jul 2 00:48:17.078366 extend-filesystems[1325]: Found vda4 Jul 2 00:48:17.078366 extend-filesystems[1325]: Found vda6 Jul 2 00:48:17.078366 extend-filesystems[1325]: Found vda7 Jul 2 00:48:17.078366 extend-filesystems[1325]: Found vda9 Jul 2 00:48:17.078366 extend-filesystems[1325]: Checking size of /dev/vda9 Jul 2 00:48:17.103852 jq[1345]: true Jul 2 00:48:17.113393 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1146) Jul 2 00:48:17.088097 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 00:48:17.103734 dbus-daemon[1323]: [system] SELinux support is enabled Jul 2 00:48:17.113775 extend-filesystems[1325]: Resized partition /dev/vda9 Jul 2 00:48:17.088376 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 2 00:48:17.113424 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 2 00:48:17.116083 extend-filesystems[1360]: resize2fs 1.47.0 (5-Feb-2023) Jul 2 00:48:17.117743 tar[1343]: linux-arm64/helm Jul 2 00:48:17.121056 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
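A few lines further on, extend-filesystems has resize2fs grow /dev/vda9 online from 553472 to 1864699 blocks of 4 KiB. Converting those block counts into sizes is simple arithmetic, shown here as a quick check:

    BLOCK_SIZE = 4096  # 4 KiB ext4 blocks, per the kernel message below

    def blocks_to_gib(blocks: int, block_size: int = BLOCK_SIZE) -> float:
        return blocks * block_size / 2**30

    old_gib = blocks_to_gib(553472)   # ~2.11 GiB before the resize
    new_gib = blocks_to_gib(1864699)  # ~7.11 GiB after growing into the partition
    print(f"/dev/vda9: {old_gib:.2f} GiB -> {new_gib:.2f} GiB")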
Jul 2 00:48:17.121094 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 2 00:48:17.122142 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 00:48:17.122157 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 2 00:48:17.124262 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 2 00:48:17.139203 update_engine[1336]: I0702 00:48:17.139075 1336 main.cc:92] Flatcar Update Engine starting Jul 2 00:48:17.141197 systemd[1]: Started update-engine.service - Update Engine. Jul 2 00:48:17.141305 update_engine[1336]: I0702 00:48:17.141253 1336 update_check_scheduler.cc:74] Next update check in 11m22s Jul 2 00:48:17.142655 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 00:48:17.146535 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 2 00:48:17.183599 systemd-logind[1334]: Watching system buttons on /dev/input/event0 (Power Button) Jul 2 00:48:17.186013 systemd-logind[1334]: New seat seat0. Jul 2 00:48:17.194505 systemd[1]: Started systemd-logind.service - User Login Management. Jul 2 00:48:17.209271 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 2 00:48:17.224040 locksmithd[1374]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 00:48:17.231186 extend-filesystems[1360]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 2 00:48:17.231186 extend-filesystems[1360]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 00:48:17.231186 extend-filesystems[1360]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 2 00:48:17.235258 extend-filesystems[1325]: Resized filesystem in /dev/vda9 Jul 2 00:48:17.231985 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 00:48:17.232229 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 2 00:48:17.237118 bash[1375]: Updated "/home/core/.ssh/authorized_keys" Jul 2 00:48:17.238039 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 2 00:48:17.239485 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 2 00:48:17.366286 containerd[1351]: time="2024-07-02T00:48:17.366169160Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13 Jul 2 00:48:17.389808 containerd[1351]: time="2024-07-02T00:48:17.389752120Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 2 00:48:17.389808 containerd[1351]: time="2024-07-02T00:48:17.389805440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:48:17.391939 containerd[1351]: time="2024-07-02T00:48:17.391894920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:48:17.391990 containerd[1351]: time="2024-07-02T00:48:17.391942040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jul 2 00:48:17.392252 containerd[1351]: time="2024-07-02T00:48:17.392222840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:48:17.392313 containerd[1351]: time="2024-07-02T00:48:17.392255960Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 00:48:17.392349 containerd[1351]: time="2024-07-02T00:48:17.392331640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 2 00:48:17.392421 containerd[1351]: time="2024-07-02T00:48:17.392403600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:48:17.392452 containerd[1351]: time="2024-07-02T00:48:17.392420600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 00:48:17.392495 containerd[1351]: time="2024-07-02T00:48:17.392480640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:48:17.392695 containerd[1351]: time="2024-07-02T00:48:17.392675680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 00:48:17.392729 containerd[1351]: time="2024-07-02T00:48:17.392699000Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 00:48:17.392729 containerd[1351]: time="2024-07-02T00:48:17.392709320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:48:17.392867 containerd[1351]: time="2024-07-02T00:48:17.392847880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:48:17.392867 containerd[1351]: time="2024-07-02T00:48:17.392864880Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 00:48:17.392940 containerd[1351]: time="2024-07-02T00:48:17.392917560Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 00:48:17.392970 containerd[1351]: time="2024-07-02T00:48:17.392940400Z" level=info msg="metadata content store policy set" policy=shared Jul 2 00:48:17.396793 containerd[1351]: time="2024-07-02T00:48:17.396758440Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 00:48:17.396793 containerd[1351]: time="2024-07-02T00:48:17.396794680Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 00:48:17.396913 containerd[1351]: time="2024-07-02T00:48:17.396808520Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 00:48:17.396913 containerd[1351]: time="2024-07-02T00:48:17.396844600Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jul 2 00:48:17.396913 containerd[1351]: time="2024-07-02T00:48:17.396861040Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 2 00:48:17.396913 containerd[1351]: time="2024-07-02T00:48:17.396871480Z" level=info msg="NRI interface is disabled by configuration." Jul 2 00:48:17.396913 containerd[1351]: time="2024-07-02T00:48:17.396883960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 00:48:17.397095 containerd[1351]: time="2024-07-02T00:48:17.397075720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 2 00:48:17.397123 containerd[1351]: time="2024-07-02T00:48:17.397098360Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 2 00:48:17.397123 containerd[1351]: time="2024-07-02T00:48:17.397115240Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 00:48:17.397160 containerd[1351]: time="2024-07-02T00:48:17.397129720Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 00:48:17.397160 containerd[1351]: time="2024-07-02T00:48:17.397143440Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 00:48:17.397196 containerd[1351]: time="2024-07-02T00:48:17.397160560Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 00:48:17.397196 containerd[1351]: time="2024-07-02T00:48:17.397174000Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 00:48:17.397196 containerd[1351]: time="2024-07-02T00:48:17.397187280Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 00:48:17.397273 containerd[1351]: time="2024-07-02T00:48:17.397201040Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 00:48:17.397273 containerd[1351]: time="2024-07-02T00:48:17.397215560Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 00:48:17.397273 containerd[1351]: time="2024-07-02T00:48:17.397228000Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 00:48:17.397273 containerd[1351]: time="2024-07-02T00:48:17.397255600Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 00:48:17.397366 containerd[1351]: time="2024-07-02T00:48:17.397347480Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 00:48:17.398821 containerd[1351]: time="2024-07-02T00:48:17.398090440Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 00:48:17.398868 containerd[1351]: time="2024-07-02T00:48:17.398842560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 00:48:17.398905 containerd[1351]: time="2024-07-02T00:48:17.398866080Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jul 2 00:48:17.398905 containerd[1351]: time="2024-07-02T00:48:17.398900440Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 00:48:17.399078 containerd[1351]: time="2024-07-02T00:48:17.399057640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 00:48:17.399112 containerd[1351]: time="2024-07-02T00:48:17.399082080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 00:48:17.399310 containerd[1351]: time="2024-07-02T00:48:17.399100800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 00:48:17.399310 containerd[1351]: time="2024-07-02T00:48:17.399159320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 00:48:17.399310 containerd[1351]: time="2024-07-02T00:48:17.399175760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 00:48:17.399310 containerd[1351]: time="2024-07-02T00:48:17.399192240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 00:48:17.399310 containerd[1351]: time="2024-07-02T00:48:17.399211920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 00:48:17.399310 containerd[1351]: time="2024-07-02T00:48:17.399227680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 00:48:17.399310 containerd[1351]: time="2024-07-02T00:48:17.399266080Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 00:48:17.399447 containerd[1351]: time="2024-07-02T00:48:17.399409360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 2 00:48:17.399468 containerd[1351]: time="2024-07-02T00:48:17.399444000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 2 00:48:17.399468 containerd[1351]: time="2024-07-02T00:48:17.399460760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 00:48:17.399509 containerd[1351]: time="2024-07-02T00:48:17.399476160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 00:48:17.399509 containerd[1351]: time="2024-07-02T00:48:17.399492400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 00:48:17.399544 containerd[1351]: time="2024-07-02T00:48:17.399511600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 00:48:17.399544 containerd[1351]: time="2024-07-02T00:48:17.399528600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 00:48:17.399581 containerd[1351]: time="2024-07-02T00:48:17.399544720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 2 00:48:17.399886 containerd[1351]: time="2024-07-02T00:48:17.399817880Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 00:48:17.400285 containerd[1351]: time="2024-07-02T00:48:17.399901440Z" level=info msg="Connect containerd service" Jul 2 00:48:17.400285 containerd[1351]: time="2024-07-02T00:48:17.399947880Z" level=info msg="using legacy CRI server" Jul 2 00:48:17.400285 containerd[1351]: time="2024-07-02T00:48:17.399958280Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 00:48:17.400955 containerd[1351]: time="2024-07-02T00:48:17.400918480Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 00:48:17.401685 containerd[1351]: time="2024-07-02T00:48:17.401656080Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:48:17.401739 containerd[1351]: time="2024-07-02T00:48:17.401704800Z" 
level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 00:48:17.401739 containerd[1351]: time="2024-07-02T00:48:17.401722200Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 00:48:17.401739 containerd[1351]: time="2024-07-02T00:48:17.401733120Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 00:48:17.401797 containerd[1351]: time="2024-07-02T00:48:17.401742800Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin" Jul 2 00:48:17.401947 containerd[1351]: time="2024-07-02T00:48:17.401892520Z" level=info msg="Start subscribing containerd event" Jul 2 00:48:17.402020 containerd[1351]: time="2024-07-02T00:48:17.402003760Z" level=info msg="Start recovering state" Jul 2 00:48:17.402103 containerd[1351]: time="2024-07-02T00:48:17.402087920Z" level=info msg="Start event monitor" Jul 2 00:48:17.402312 containerd[1351]: time="2024-07-02T00:48:17.402104400Z" level=info msg="Start snapshots syncer" Jul 2 00:48:17.402346 containerd[1351]: time="2024-07-02T00:48:17.402320680Z" level=info msg="Start cni network conf syncer for default" Jul 2 00:48:17.402377 containerd[1351]: time="2024-07-02T00:48:17.402287120Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 00:48:17.402438 containerd[1351]: time="2024-07-02T00:48:17.402422480Z" level=info msg="Start streaming server" Jul 2 00:48:17.402501 containerd[1351]: time="2024-07-02T00:48:17.402424640Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 00:48:17.402598 containerd[1351]: time="2024-07-02T00:48:17.402582440Z" level=info msg="containerd successfully booted in 0.039579s" Jul 2 00:48:17.402680 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 00:48:17.510442 tar[1343]: linux-arm64/LICENSE Jul 2 00:48:17.510540 tar[1343]: linux-arm64/README.md Jul 2 00:48:17.523367 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 2 00:48:17.809436 systemd-networkd[1144]: eth0: Gained IPv6LL Jul 2 00:48:17.811642 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 2 00:48:17.812817 systemd[1]: Reached target network-online.target - Network is Online. Jul 2 00:48:17.821624 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 2 00:48:17.824502 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:48:17.826987 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 2 00:48:17.836827 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 2 00:48:17.837086 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 2 00:48:17.838221 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 2 00:48:17.845045 sshd_keygen[1350]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 00:48:17.854638 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 2 00:48:17.865171 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 2 00:48:17.878865 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 2 00:48:17.883900 systemd[1]: issuegen.service: Deactivated successfully. 
Jul 2 00:48:17.884148 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 2 00:48:17.887356 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 2 00:48:17.895821 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 2 00:48:17.907847 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 2 00:48:17.911392 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 2 00:48:17.912969 systemd[1]: Reached target getty.target - Login Prompts. Jul 2 00:48:18.337087 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:48:18.339363 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 2 00:48:18.343113 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... Jul 2 00:48:18.350757 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 00:48:18.351054 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. Jul 2 00:48:18.352441 systemd[1]: Startup finished in 4.897s (kernel) + 3.356s (userspace) = 8.254s. Jul 2 00:48:18.863531 kubelet[1441]: E0702 00:48:18.863439 1441 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:48:18.865646 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:48:18.865794 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:48:23.332584 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 2 00:48:23.341613 systemd[1]: Started sshd@0-10.0.0.149:22-10.0.0.1:51618.service - OpenSSH per-connection server daemon (10.0.0.1:51618). Jul 2 00:48:23.398930 sshd[1452]: Accepted publickey for core from 10.0.0.1 port 51618 ssh2: RSA SHA256:Kp5j1k40JRjOvdBxgA97BAtl42hDR/cCrPEvDSx0HPk Jul 2 00:48:23.400663 sshd[1452]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:48:23.409638 systemd-logind[1334]: New session 1 of user core. Jul 2 00:48:23.411507 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 00:48:23.425569 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 00:48:23.438067 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 00:48:23.439687 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 2 00:48:23.443122 (systemd)[1457]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:48:23.517984 systemd[1457]: Queued start job for default target default.target. Jul 2 00:48:23.518202 systemd[1457]: Reached target paths.target - Paths. Jul 2 00:48:23.518216 systemd[1457]: Reached target sockets.target - Sockets. Jul 2 00:48:23.518226 systemd[1457]: Reached target timers.target - Timers. Jul 2 00:48:23.518247 systemd[1457]: Reached target basic.target - Basic System. Jul 2 00:48:23.518304 systemd[1457]: Reached target default.target - Main User Target. Jul 2 00:48:23.518326 systemd[1457]: Startup finished in 69ms. Jul 2 00:48:23.518433 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 00:48:23.526548 systemd[1]: Started session-1.scope - Session 1 of User core. 
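The kubelet exit above is the usual symptom of a node where /var/lib/kubelet/config.yaml has not been written yet; on kubeadm-managed nodes that file normally appears during kubeadm init or join. A tiny pre-flight check mirroring the failure, with the path taken straight from the error message, could be:

    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")  # path from the error above

    def kubelet_config_present() -> bool:
        if KUBELET_CONFIG.is_file():
            return True
        # Matches the failure mode logged by run.go: the unit keeps exiting
        # with status 1 until some provisioning step writes this file.
        print(f"missing {KUBELET_CONFIG}; kubelet.service will keep failing")
        return False

    kubelet_config_present()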
Jul 2 00:48:23.589868 systemd[1]: Started sshd@1-10.0.0.149:22-10.0.0.1:51630.service - OpenSSH per-connection server daemon (10.0.0.1:51630). Jul 2 00:48:23.621093 sshd[1466]: Accepted publickey for core from 10.0.0.1 port 51630 ssh2: RSA SHA256:Kp5j1k40JRjOvdBxgA97BAtl42hDR/cCrPEvDSx0HPk Jul 2 00:48:23.622687 sshd[1466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:48:23.627328 systemd-logind[1334]: New session 2 of user core. Jul 2 00:48:23.645584 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 00:48:23.700817 sshd[1466]: pam_unix(sshd:session): session closed for user core Jul 2 00:48:23.715647 systemd[1]: Started sshd@2-10.0.0.149:22-10.0.0.1:51632.service - OpenSSH per-connection server daemon (10.0.0.1:51632). Jul 2 00:48:23.716190 systemd[1]: sshd@1-10.0.0.149:22-10.0.0.1:51630.service: Deactivated successfully. Jul 2 00:48:23.717402 systemd-logind[1334]: Session 2 logged out. Waiting for processes to exit. Jul 2 00:48:23.717487 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 00:48:23.718370 systemd-logind[1334]: Removed session 2. Jul 2 00:48:23.749833 sshd[1471]: Accepted publickey for core from 10.0.0.1 port 51632 ssh2: RSA SHA256:Kp5j1k40JRjOvdBxgA97BAtl42hDR/cCrPEvDSx0HPk Jul 2 00:48:23.751131 sshd[1471]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:48:23.757790 systemd-logind[1334]: New session 3 of user core. Jul 2 00:48:23.771654 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 2 00:48:23.823876 sshd[1471]: pam_unix(sshd:session): session closed for user core Jul 2 00:48:23.836667 systemd[1]: Started sshd@3-10.0.0.149:22-10.0.0.1:51636.service - OpenSSH per-connection server daemon (10.0.0.1:51636). Jul 2 00:48:23.837203 systemd[1]: sshd@2-10.0.0.149:22-10.0.0.1:51632.service: Deactivated successfully. Jul 2 00:48:23.838090 systemd-logind[1334]: Session 3 logged out. Waiting for processes to exit. Jul 2 00:48:23.838172 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 00:48:23.838916 systemd-logind[1334]: Removed session 3. Jul 2 00:48:23.867101 sshd[1478]: Accepted publickey for core from 10.0.0.1 port 51636 ssh2: RSA SHA256:Kp5j1k40JRjOvdBxgA97BAtl42hDR/cCrPEvDSx0HPk Jul 2 00:48:23.868331 sshd[1478]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:48:23.871769 systemd-logind[1334]: New session 4 of user core. Jul 2 00:48:23.883508 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 2 00:48:23.936356 sshd[1478]: pam_unix(sshd:session): session closed for user core Jul 2 00:48:23.944557 systemd[1]: Started sshd@4-10.0.0.149:22-10.0.0.1:51644.service - OpenSSH per-connection server daemon (10.0.0.1:51644). Jul 2 00:48:23.945041 systemd[1]: sshd@3-10.0.0.149:22-10.0.0.1:51636.service: Deactivated successfully. Jul 2 00:48:23.945851 systemd-logind[1334]: Session 4 logged out. Waiting for processes to exit. Jul 2 00:48:23.945922 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 00:48:23.946986 systemd-logind[1334]: Removed session 4. Jul 2 00:48:23.975486 sshd[1485]: Accepted publickey for core from 10.0.0.1 port 51644 ssh2: RSA SHA256:Kp5j1k40JRjOvdBxgA97BAtl42hDR/cCrPEvDSx0HPk Jul 2 00:48:23.976912 sshd[1485]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:48:23.980074 systemd-logind[1334]: New session 5 of user core. Jul 2 00:48:23.996542 systemd[1]: Started session-5.scope - Session 5 of User core. 
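Each accepted login above reports the client key as RSA SHA256:Kp5j1k40..., the unpadded base64 of a SHA-256 digest over the raw public-key blob. Recomputing that fingerprint from the authorized_keys file updated earlier in the boot takes only a few lines:

    import base64
    import hashlib

    def ssh_sha256_fingerprint(authorized_keys_line: str) -> str:
        # An entry looks like "ssh-rsa AAAAB3... comment"; the second field is
        # the base64-encoded key blob that sshd hashes for its log line.
        blob = base64.b64decode(authorized_keys_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    with open("/home/core/.ssh/authorized_keys") as fh:
        for line in fh:
            if line.strip() and not line.startswith("#"):
                print(ssh_sha256_fingerprint(line))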
Jul 2 00:48:24.058380 sudo[1491]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 2 00:48:24.058995 sudo[1491]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:48:24.074498 sudo[1491]: pam_unix(sudo:session): session closed for user root Jul 2 00:48:24.076656 sshd[1485]: pam_unix(sshd:session): session closed for user core Jul 2 00:48:24.085725 systemd[1]: Started sshd@5-10.0.0.149:22-10.0.0.1:51650.service - OpenSSH per-connection server daemon (10.0.0.1:51650). Jul 2 00:48:24.086230 systemd[1]: sshd@4-10.0.0.149:22-10.0.0.1:51644.service: Deactivated successfully. Jul 2 00:48:24.087205 systemd-logind[1334]: Session 5 logged out. Waiting for processes to exit. Jul 2 00:48:24.087298 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 00:48:24.088103 systemd-logind[1334]: Removed session 5. Jul 2 00:48:24.117088 sshd[1493]: Accepted publickey for core from 10.0.0.1 port 51650 ssh2: RSA SHA256:Kp5j1k40JRjOvdBxgA97BAtl42hDR/cCrPEvDSx0HPk Jul 2 00:48:24.118303 sshd[1493]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:48:24.122349 systemd-logind[1334]: New session 6 of user core. Jul 2 00:48:24.128515 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 2 00:48:24.181737 sudo[1500]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 2 00:48:24.181980 sudo[1500]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:48:24.184975 sudo[1500]: pam_unix(sudo:session): session closed for user root Jul 2 00:48:24.189122 sudo[1499]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 2 00:48:24.189384 sudo[1499]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:48:24.205556 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 2 00:48:24.205000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jul 2 00:48:24.206974 auditctl[1503]: No rules Jul 2 00:48:24.207406 systemd[1]: audit-rules.service: Deactivated successfully. Jul 2 00:48:24.207616 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 2 00:48:24.208444 kernel: kauditd_printk_skb: 62 callbacks suppressed Jul 2 00:48:24.208493 kernel: audit: type=1305 audit(1719881304.205:151): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jul 2 00:48:24.208520 kernel: audit: type=1300 audit(1719881304.205:151): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc9572f60 a2=420 a3=0 items=0 ppid=1 pid=1503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:24.205000 audit[1503]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc9572f60 a2=420 a3=0 items=0 ppid=1 pid=1503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:24.209249 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
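The audit PROCTITLE records scattered through this log (the auditctl and iptables ones below, for instance) carry the command line hex-encoded with NUL-separated arguments. Decoding one back into a readable command is straightforward:

    def decode_proctitle(hex_value: str) -> str:
        # PROCTITLE stores argv joined by NUL bytes, hex-encoded.
        return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode()

    # Record from the audit-rules restart just below:
    print(decode_proctitle("2F7362696E2F617564697463746C002D44"))
    # -> /sbin/auditctl -D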
Jul 2 00:48:24.211163 kernel: audit: type=1327 audit(1719881304.205:151): proctitle=2F7362696E2F617564697463746C002D44 Jul 2 00:48:24.205000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jul 2 00:48:24.211997 kernel: audit: type=1131 audit(1719881304.206:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:24.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:24.227601 augenrules[1521]: No rules Jul 2 00:48:24.228453 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 00:48:24.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:24.229560 sudo[1499]: pam_unix(sudo:session): session closed for user root Jul 2 00:48:24.228000 audit[1499]: USER_END pid=1499 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 00:48:24.231101 sshd[1493]: pam_unix(sshd:session): session closed for user core Jul 2 00:48:24.233217 kernel: audit: type=1130 audit(1719881304.227:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:24.233289 kernel: audit: type=1106 audit(1719881304.228:154): pid=1499 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 00:48:24.233312 kernel: audit: type=1104 audit(1719881304.228:155): pid=1499 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 00:48:24.228000 audit[1499]: CRED_DISP pid=1499 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jul 2 00:48:24.230000 audit[1493]: USER_END pid=1493 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:48:24.237727 kernel: audit: type=1106 audit(1719881304.230:156): pid=1493 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:48:24.237760 kernel: audit: type=1104 audit(1719881304.230:157): pid=1493 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:48:24.230000 audit[1493]: CRED_DISP pid=1493 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:48:24.241610 systemd[1]: Started sshd@6-10.0.0.149:22-10.0.0.1:51652.service - OpenSSH per-connection server daemon (10.0.0.1:51652). Jul 2 00:48:24.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.149:22-10.0.0.1:51652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:24.242115 systemd[1]: sshd@5-10.0.0.149:22-10.0.0.1:51650.service: Deactivated successfully. Jul 2 00:48:24.243013 systemd-logind[1334]: Session 6 logged out. Waiting for processes to exit. Jul 2 00:48:24.243080 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 00:48:24.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.149:22-10.0.0.1:51650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:24.243826 systemd-logind[1334]: Removed session 6. Jul 2 00:48:24.244265 kernel: audit: type=1130 audit(1719881304.240:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.149:22-10.0.0.1:51652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:48:24.271000 audit[1526]: USER_ACCT pid=1526 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:48:24.272688 sshd[1526]: Accepted publickey for core from 10.0.0.1 port 51652 ssh2: RSA SHA256:Kp5j1k40JRjOvdBxgA97BAtl42hDR/cCrPEvDSx0HPk Jul 2 00:48:24.272000 audit[1526]: CRED_ACQ pid=1526 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:48:24.272000 audit[1526]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffff0f07e0 a2=3 a3=1 items=0 ppid=1 pid=1526 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:24.272000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 00:48:24.273999 sshd[1526]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:48:24.277024 systemd-logind[1334]: New session 7 of user core. Jul 2 00:48:24.290631 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 2 00:48:24.292000 audit[1526]: USER_START pid=1526 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:48:24.293000 audit[1532]: CRED_ACQ pid=1532 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:48:24.341000 audit[1533]: USER_ACCT pid=1533 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 00:48:24.342339 sudo[1533]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 00:48:24.342000 audit[1533]: CRED_REFR pid=1533 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 00:48:24.342916 sudo[1533]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:48:24.344000 audit[1533]: USER_START pid=1533 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 00:48:24.460618 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 2 00:48:24.714653 dockerd[1543]: time="2024-07-02T00:48:24.714520980Z" level=info msg="Starting up" Jul 2 00:48:24.952470 dockerd[1543]: time="2024-07-02T00:48:24.952428417Z" level=info msg="Loading containers: start." 
Jul 2 00:48:25.001000 audit[1578]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1578 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:25.001000 audit[1578]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffc87e65d0 a2=0 a3=1 items=0 ppid=1543 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:25.001000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jul 2 00:48:25.003000 audit[1580]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:25.003000 audit[1580]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffea0e3b40 a2=0 a3=1 items=0 ppid=1543 pid=1580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:25.003000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jul 2 00:48:25.006000 audit[1582]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:25.006000 audit[1582]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff58e70d0 a2=0 a3=1 items=0 ppid=1543 pid=1582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:25.006000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 2 00:48:25.008000 audit[1584]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1584 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:25.008000 audit[1584]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc127d740 a2=0 a3=1 items=0 ppid=1543 pid=1584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:25.008000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 2 00:48:25.011000 audit[1586]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1586 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:25.011000 audit[1586]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff49414e0 a2=0 a3=1 items=0 ppid=1543 pid=1586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:25.011000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jul 2 00:48:25.013000 audit[1588]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1588 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:25.013000 audit[1588]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=228 a0=3 a1=ffffeb9c9020 a2=0 a3=1 items=0 ppid=1543 pid=1588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:25.013000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jul 2 00:48:25.026000 audit[1590]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1590 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:25.026000 audit[1590]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff1ea6710 a2=0 a3=1 items=0 ppid=1543 pid=1590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:25.026000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jul 2 00:48:25.028000 audit[1592]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1592 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:25.028000 audit[1592]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffe16e9270 a2=0 a3=1 items=0 ppid=1543 pid=1592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:25.028000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jul 2 00:48:25.030000 audit[1594]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1594 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:25.030000 audit[1594]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=fffff165ec10 a2=0 a3=1 items=0 ppid=1543 pid=1594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:25.030000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 2 00:48:25.038000 audit[1598]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1598 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:25.038000 audit[1598]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffe1bc3ba0 a2=0 a3=1 items=0 ppid=1543 pid=1598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:25.038000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 2 00:48:25.039000 audit[1599]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1599 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:25.039000 audit[1599]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffffd0a6a10 a2=0 a3=1 items=0 ppid=1543 pid=1599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:25.039000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 2 00:48:25.047271 kernel: Initializing XFRM netlink socket Jul 2 00:48:25.075000 audit[1607]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1607 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:25.075000 audit[1607]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=ffffd4f8c660 a2=0 a3=1 items=0 ppid=1543 pid=1607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:25.075000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jul 2 00:48:25.088000 audit[1610]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1610 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:25.088000 audit[1610]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=fffffa6ac660 a2=0 a3=1 items=0 ppid=1543 pid=1610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:25.088000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jul 2 00:48:25.091000 audit[1614]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1614 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:25.091000 audit[1614]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffdf4de730 a2=0 a3=1 items=0 ppid=1543 pid=1614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:25.091000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jul 2 00:48:25.094000 audit[1616]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1616 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:25.094000 audit[1616]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffd7239da0 a2=0 a3=1 items=0 ppid=1543 pid=1616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:25.094000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jul 2 00:48:25.096000 audit[1618]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1618 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:25.096000 audit[1618]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=fffff3b4d110 a2=0 a3=1 items=0 ppid=1543 pid=1618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:25.096000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jul 2 00:48:25.099000 audit[1620]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1620 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:25.099000 audit[1620]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=fffff49d7050 a2=0 a3=1 items=0 ppid=1543 pid=1620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:25.099000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jul 2 00:48:25.101000 audit[1622]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1622 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:25.101000 audit[1622]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=ffffd3111250 a2=0 a3=1 items=0 ppid=1543 pid=1622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:25.101000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jul 2 00:48:25.111000 audit[1625]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1625 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:25.111000 audit[1625]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffe3ffe430 a2=0 a3=1 items=0 ppid=1543 pid=1625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:25.111000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jul 2 00:48:25.113000 audit[1627]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1627 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:25.113000 audit[1627]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffc175cbe0 a2=0 a3=1 items=0 ppid=1543 pid=1627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:25.113000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 2 00:48:25.115000 audit[1629]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1629 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:25.115000 audit[1629]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffc3016390 a2=0 a3=1 items=0 ppid=1543 pid=1629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:25.115000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 2 00:48:25.117000 audit[1631]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1631 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:25.117000 audit[1631]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffffe30a030 a2=0 a3=1 items=0 ppid=1543 pid=1631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:25.117000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jul 2 00:48:25.118738 systemd-networkd[1144]: docker0: Link UP Jul 2 00:48:25.145000 audit[1635]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1635 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:25.145000 audit[1635]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc24aa5e0 a2=0 a3=1 items=0 ppid=1543 pid=1635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:25.145000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 2 00:48:25.146000 audit[1636]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1636 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:25.146000 audit[1636]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd7811630 a2=0 a3=1 items=0 ppid=1543 pid=1636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:25.146000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 2 00:48:25.148433 dockerd[1543]: time="2024-07-02T00:48:25.148394154Z" level=info msg="Loading containers: done." Jul 2 00:48:25.215915 dockerd[1543]: time="2024-07-02T00:48:25.215853985Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 00:48:25.216087 dockerd[1543]: time="2024-07-02T00:48:25.216075822Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jul 2 00:48:25.216229 dockerd[1543]: time="2024-07-02T00:48:25.216188679Z" level=info msg="Daemon has completed initialization" Jul 2 00:48:25.244895 dockerd[1543]: time="2024-07-02T00:48:25.244767961Z" level=info msg="API listen on /run/docker.sock" Jul 2 00:48:25.244997 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jul 2 00:48:25.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:25.822085 containerd[1351]: time="2024-07-02T00:48:25.822013712Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jul 2 00:48:26.499592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount522674894.mount: Deactivated successfully. Jul 2 00:48:28.870218 containerd[1351]: time="2024-07-02T00:48:28.870163092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:48:28.870977 containerd[1351]: time="2024-07-02T00:48:28.870936626Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=31671540" Jul 2 00:48:28.871920 containerd[1351]: time="2024-07-02T00:48:28.871885989Z" level=info msg="ImageCreate event name:\"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:48:28.874005 containerd[1351]: time="2024-07-02T00:48:28.873970288Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:48:28.876570 containerd[1351]: time="2024-07-02T00:48:28.876527396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:48:28.877703 containerd[1351]: time="2024-07-02T00:48:28.877661767Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"31668338\" in 3.055585919s" Jul 2 00:48:28.877763 containerd[1351]: time="2024-07-02T00:48:28.877704567Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\"" Jul 2 00:48:28.896938 containerd[1351]: time="2024-07-02T00:48:28.896896483Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jul 2 00:48:29.116652 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 00:48:29.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:29.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:29.116815 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:48:29.123572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:48:29.219916 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 00:48:29.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:29.220775 kernel: kauditd_printk_skb: 86 callbacks suppressed Jul 2 00:48:29.220858 kernel: audit: type=1130 audit(1719881309.218:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:29.345645 kubelet[1753]: E0702 00:48:29.345583 1753 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:48:29.348679 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:48:29.348829 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:48:29.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 2 00:48:29.351266 kernel: audit: type=1131 audit(1719881309.347:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 2 00:48:32.270180 containerd[1351]: time="2024-07-02T00:48:32.270134140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:48:32.271293 containerd[1351]: time="2024-07-02T00:48:32.270646387Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=28893120" Jul 2 00:48:32.271834 containerd[1351]: time="2024-07-02T00:48:32.271807205Z" level=info msg="ImageCreate event name:\"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:48:32.273982 containerd[1351]: time="2024-07-02T00:48:32.273953849Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:48:32.276074 containerd[1351]: time="2024-07-02T00:48:32.276041868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:48:32.278230 containerd[1351]: time="2024-07-02T00:48:32.278191724Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"30445463\" in 3.381108047s" Jul 2 00:48:32.278364 containerd[1351]: time="2024-07-02T00:48:32.278341660Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference 
\"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\"" Jul 2 00:48:32.296832 containerd[1351]: time="2024-07-02T00:48:32.296786857Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jul 2 00:48:33.426480 containerd[1351]: time="2024-07-02T00:48:33.426425448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:48:33.427227 containerd[1351]: time="2024-07-02T00:48:33.427196159Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=15358440" Jul 2 00:48:33.428408 containerd[1351]: time="2024-07-02T00:48:33.428383551Z" level=info msg="ImageCreate event name:\"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:48:33.430761 containerd[1351]: time="2024-07-02T00:48:33.430720889Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:48:33.433078 containerd[1351]: time="2024-07-02T00:48:33.433047672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:48:33.434186 containerd[1351]: time="2024-07-02T00:48:33.434148894Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"16910801\" in 1.137309922s" Jul 2 00:48:33.434314 containerd[1351]: time="2024-07-02T00:48:33.434293580Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\"" Jul 2 00:48:33.453108 containerd[1351]: time="2024-07-02T00:48:33.453068099Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jul 2 00:48:34.412977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount437311767.mount: Deactivated successfully. 
Jul 2 00:48:34.764701 containerd[1351]: time="2024-07-02T00:48:34.764551850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:48:34.767514 containerd[1351]: time="2024-07-02T00:48:34.767417123Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=24772463" Jul 2 00:48:34.769664 containerd[1351]: time="2024-07-02T00:48:34.769620286Z" level=info msg="ImageCreate event name:\"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:48:34.771903 containerd[1351]: time="2024-07-02T00:48:34.771860559Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:48:34.774843 containerd[1351]: time="2024-07-02T00:48:34.774794594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:48:34.775990 containerd[1351]: time="2024-07-02T00:48:34.775582032Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"24771480\" in 1.322468183s" Jul 2 00:48:34.775990 containerd[1351]: time="2024-07-02T00:48:34.775618980Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\"" Jul 2 00:48:34.797944 containerd[1351]: time="2024-07-02T00:48:34.797907053Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 00:48:35.213206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1801616808.mount: Deactivated successfully. 
Jul 2 00:48:35.219615 containerd[1351]: time="2024-07-02T00:48:35.219572679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:48:35.223203 containerd[1351]: time="2024-07-02T00:48:35.223163650Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jul 2 00:48:35.224414 containerd[1351]: time="2024-07-02T00:48:35.224336752Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:48:35.226092 containerd[1351]: time="2024-07-02T00:48:35.226060913Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:48:35.228367 containerd[1351]: time="2024-07-02T00:48:35.228328114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:48:35.229339 containerd[1351]: time="2024-07-02T00:48:35.229304749Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 431.201122ms" Jul 2 00:48:35.229469 containerd[1351]: time="2024-07-02T00:48:35.229448239Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jul 2 00:48:35.249902 containerd[1351]: time="2024-07-02T00:48:35.249858095Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 00:48:35.848062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount236367461.mount: Deactivated successfully. 
Jul 2 00:48:38.224598 containerd[1351]: time="2024-07-02T00:48:38.224532243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:48:38.225224 containerd[1351]: time="2024-07-02T00:48:38.225192263Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Jul 2 00:48:38.226174 containerd[1351]: time="2024-07-02T00:48:38.226138697Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:48:38.229470 containerd[1351]: time="2024-07-02T00:48:38.228311570Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:48:38.230751 containerd[1351]: time="2024-07-02T00:48:38.230713639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:48:38.232169 containerd[1351]: time="2024-07-02T00:48:38.232111213Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.982158201s" Jul 2 00:48:38.232236 containerd[1351]: time="2024-07-02T00:48:38.232163864Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jul 2 00:48:38.251289 containerd[1351]: time="2024-07-02T00:48:38.251223824Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jul 2 00:48:38.835475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1569557722.mount: Deactivated successfully. 
Jul 2 00:48:39.259938 containerd[1351]: time="2024-07-02T00:48:39.259837819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:48:39.260844 containerd[1351]: time="2024-07-02T00:48:39.260786453Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558464" Jul 2 00:48:39.261426 containerd[1351]: time="2024-07-02T00:48:39.261397938Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:48:39.262906 containerd[1351]: time="2024-07-02T00:48:39.262872927Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:48:39.265524 containerd[1351]: time="2024-07-02T00:48:39.265487600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:48:39.266332 containerd[1351]: time="2024-07-02T00:48:39.266295821Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 1.015007408s" Jul 2 00:48:39.266401 containerd[1351]: time="2024-07-02T00:48:39.266332317Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Jul 2 00:48:39.477866 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 00:48:39.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:39.478073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:48:39.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:39.481746 kernel: audit: type=1130 audit(1719881319.477:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:39.481811 kernel: audit: type=1131 audit(1719881319.477:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:39.491587 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:48:39.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:39.578179 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 00:48:39.581296 kernel: audit: type=1130 audit(1719881319.577:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:39.637232 kubelet[1901]: E0702 00:48:39.637167 1901 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:48:39.640080 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:48:39.640225 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:48:39.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 2 00:48:39.644275 kernel: audit: type=1131 audit(1719881319.639:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 2 00:48:43.924807 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:48:43.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:43.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:43.928628 kernel: audit: type=1130 audit(1719881323.923:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:43.928673 kernel: audit: type=1131 audit(1719881323.923:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:43.941642 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:48:43.955039 systemd[1]: Reloading. Jul 2 00:48:44.165164 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:48:44.246418 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 2 00:48:44.246485 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 2 00:48:44.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 2 00:48:44.246728 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:48:44.248449 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 2 00:48:44.249274 kernel: audit: type=1130 audit(1719881324.245:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 2 00:48:44.342018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:48:44.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:44.346065 kernel: audit: type=1130 audit(1719881324.341:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:44.391303 kubelet[2034]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:48:44.391303 kubelet[2034]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:48:44.391303 kubelet[2034]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:48:44.391663 kubelet[2034]: I0702 00:48:44.391358 2034 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:48:45.010089 kubelet[2034]: I0702 00:48:45.010050 2034 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 00:48:45.010089 kubelet[2034]: I0702 00:48:45.010076 2034 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:48:45.010355 kubelet[2034]: I0702 00:48:45.010343 2034 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 00:48:45.062873 kubelet[2034]: I0702 00:48:45.062843 2034 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:48:45.070585 kubelet[2034]: E0702 00:48:45.070562 2034 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.149:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.149:6443: connect: connection refused Jul 2 00:48:45.083594 kubelet[2034]: W0702 00:48:45.083570 2034 machine.go:65] Cannot read vendor id correctly, set empty. Jul 2 00:48:45.084610 kubelet[2034]: I0702 00:48:45.084591 2034 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 00:48:45.085117 kubelet[2034]: I0702 00:48:45.085100 2034 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:48:45.085429 kubelet[2034]: I0702 00:48:45.085407 2034 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:48:45.085587 kubelet[2034]: I0702 00:48:45.085573 2034 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:48:45.085652 kubelet[2034]: I0702 00:48:45.085643 2034 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:48:45.085882 kubelet[2034]: I0702 00:48:45.085868 2034 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:48:45.089063 kubelet[2034]: I0702 00:48:45.089035 2034 kubelet.go:393] "Attempting to sync node with API server" Jul 2 00:48:45.089185 kubelet[2034]: I0702 00:48:45.089170 2034 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:48:45.089357 kubelet[2034]: I0702 00:48:45.089345 2034 kubelet.go:309] "Adding apiserver pod source" Jul 2 00:48:45.089422 kubelet[2034]: I0702 00:48:45.089412 2034 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:48:45.090493 kubelet[2034]: W0702 00:48:45.090446 2034 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jul 2 00:48:45.090593 kubelet[2034]: E0702 00:48:45.090582 2034 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jul 2 00:48:45.091857 kubelet[2034]: W0702 00:48:45.091816 2034 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.149:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jul 2 
00:48:45.091914 kubelet[2034]: E0702 00:48:45.091866 2034 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.149:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jul 2 00:48:45.092616 kubelet[2034]: I0702 00:48:45.092591 2034 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jul 2 00:48:45.096052 kubelet[2034]: W0702 00:48:45.096019 2034 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 00:48:45.096764 kubelet[2034]: I0702 00:48:45.096737 2034 server.go:1232] "Started kubelet" Jul 2 00:48:45.096840 kubelet[2034]: I0702 00:48:45.096820 2034 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:48:45.097088 kubelet[2034]: I0702 00:48:45.097058 2034 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 00:48:45.097400 kubelet[2034]: I0702 00:48:45.097382 2034 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:48:45.097671 kubelet[2034]: I0702 00:48:45.097634 2034 server.go:462] "Adding debug handlers to kubelet server" Jul 2 00:48:45.099737 kubelet[2034]: E0702 00:48:45.098966 2034 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 00:48:45.099737 kubelet[2034]: E0702 00:48:45.098990 2034 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:48:45.100676 kubelet[2034]: I0702 00:48:45.100647 2034 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:48:45.102004 kubelet[2034]: E0702 00:48:45.101987 2034 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:48:45.102126 kubelet[2034]: I0702 00:48:45.102113 2034 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:48:45.102297 kubelet[2034]: I0702 00:48:45.102280 2034 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 00:48:45.102422 kubelet[2034]: I0702 00:48:45.102411 2034 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 00:48:45.102759 kubelet[2034]: W0702 00:48:45.102724 2034 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jul 2 00:48:45.102863 kubelet[2034]: E0702 00:48:45.102851 2034 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jul 2 00:48:45.103302 kubelet[2034]: E0702 00:48:45.103281 2034 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="200ms" Jul 2 00:48:45.103000 audit[2046]: NETFILTER_CFG 
table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2046 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:45.104464 kubelet[2034]: E0702 00:48:45.100028 2034 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17de3ef3ee379c24", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 48, 45, 96705060, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 48, 45, 96705060, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.149:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.149:6443: connect: connection refused'(may retry after sleeping) Jul 2 00:48:45.103000 audit[2046]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff4d94980 a2=0 a3=1 items=0 ppid=2034 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:45.108348 kernel: audit: type=1325 audit(1719881325.103:205): table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2046 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:45.108414 kernel: audit: type=1300 audit(1719881325.103:205): arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff4d94980 a2=0 a3=1 items=0 ppid=2034 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:45.108437 kernel: audit: type=1327 audit(1719881325.103:205): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jul 2 00:48:45.103000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jul 2 00:48:45.104000 audit[2047]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:45.110094 kernel: audit: type=1325 audit(1719881325.104:206): table=filter:27 family=2 entries=1 op=nft_register_chain pid=2047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:45.110159 kernel: audit: type=1300 audit(1719881325.104:206): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe6f1e190 a2=0 a3=1 items=0 ppid=2034 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:45.104000 
audit[2047]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe6f1e190 a2=0 a3=1 items=0 ppid=2034 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:45.104000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 2 00:48:45.114745 kernel: audit: type=1327 audit(1719881325.104:206): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 2 00:48:45.110000 audit[2049]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:45.116200 kernel: audit: type=1325 audit(1719881325.110:207): table=filter:28 family=2 entries=2 op=nft_register_chain pid=2049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:45.116275 kernel: audit: type=1300 audit(1719881325.110:207): arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffdc90f7a0 a2=0 a3=1 items=0 ppid=2034 pid=2049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:45.110000 audit[2049]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffdc90f7a0 a2=0 a3=1 items=0 ppid=2034 pid=2049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:45.110000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 2 00:48:45.120549 kernel: audit: type=1327 audit(1719881325.110:207): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 2 00:48:45.120615 kernel: audit: type=1325 audit(1719881325.113:208): table=filter:29 family=2 entries=2 op=nft_register_chain pid=2052 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:45.113000 audit[2052]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2052 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:45.113000 audit[2052]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe48a05f0 a2=0 a3=1 items=0 ppid=2034 pid=2052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:45.113000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 2 00:48:45.127000 audit[2058]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:45.127000 audit[2058]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffffdab3610 a2=0 a3=1 items=0 ppid=2034 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:45.127000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jul 2 00:48:45.128137 kubelet[2034]: I0702 00:48:45.128108 2034 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:48:45.128000 audit[2059]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=2059 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 00:48:45.128000 audit[2059]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd5734120 a2=0 a3=1 items=0 ppid=2034 pid=2059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:45.128000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jul 2 00:48:45.128000 audit[2060]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=2060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:45.128000 audit[2060]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff697d4d0 a2=0 a3=1 items=0 ppid=2034 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:45.128000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jul 2 00:48:45.129463 kubelet[2034]: I0702 00:48:45.129438 2034 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 00:48:45.129511 kubelet[2034]: I0702 00:48:45.129468 2034 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:48:45.129511 kubelet[2034]: I0702 00:48:45.129488 2034 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 00:48:45.129556 kubelet[2034]: E0702 00:48:45.129540 2034 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:48:45.130394 kubelet[2034]: W0702 00:48:45.130365 2034 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jul 2 00:48:45.130456 kubelet[2034]: E0702 00:48:45.130403 2034 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jul 2 00:48:45.129000 audit[2061]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=2061 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 00:48:45.129000 audit[2061]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe965b110 a2=0 a3=1 items=0 ppid=2034 pid=2061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:45.129000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jul 2 00:48:45.130000 audit[2062]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=2062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:45.130000 audit[2062]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe0a00600 a2=0 a3=1 items=0 ppid=2034 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:45.130000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jul 2 00:48:45.131000 audit[2063]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=2063 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 00:48:45.131000 audit[2063]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffe3399a50 a2=0 a3=1 items=0 ppid=2034 pid=2063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:45.131000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jul 2 00:48:45.131000 audit[2064]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_chain pid=2064 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:48:45.131000 audit[2064]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcbad3490 a2=0 a3=1 items=0 ppid=2034 pid=2064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:45.131000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jul 2 00:48:45.132000 audit[2065]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=2065 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 00:48:45.132000 audit[2065]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd9318c60 a2=0 a3=1 items=0 ppid=2034 pid=2065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:48:45.132000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jul 2 00:48:45.148553 kubelet[2034]: I0702 00:48:45.148526 2034 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:48:45.148553 kubelet[2034]: I0702 00:48:45.148548 2034 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:48:45.148678 kubelet[2034]: I0702 00:48:45.148579 2034 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:48:45.203769 kubelet[2034]: I0702 00:48:45.203739 2034 policy_none.go:49] "None policy: Start" Jul 2 00:48:45.204043 kubelet[2034]: I0702 00:48:45.203792 2034 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:48:45.204614 kubelet[2034]: I0702 00:48:45.204594 2034 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 00:48:45.204672 kubelet[2034]: I0702 00:48:45.204621 2034 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:48:45.204973 kubelet[2034]: E0702 00:48:45.204957 2034 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.149:6443/api/v1/nodes\": dial tcp 10.0.0.149:6443: connect: connection refused" node="localhost" Jul 2 00:48:45.208568 kubelet[2034]: I0702 00:48:45.208543 2034 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:48:45.208832 kubelet[2034]: I0702 00:48:45.208812 2034 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:48:45.209442 kubelet[2034]: E0702 00:48:45.209419 2034 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 2 00:48:45.230020 kubelet[2034]: I0702 00:48:45.229975 2034 topology_manager.go:215] "Topology Admit Handler" podUID="0427f0352968d43b8b5a35dfc8896680" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 00:48:45.231304 kubelet[2034]: I0702 00:48:45.231283 2034 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 00:48:45.232333 kubelet[2034]: I0702 00:48:45.232310 2034 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 00:48:45.303734 kubelet[2034]: I0702 00:48:45.303069 2034 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:48:45.303734 kubelet[2034]: I0702 00:48:45.303112 2034 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:48:45.303734 kubelet[2034]: I0702 00:48:45.303136 2034 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jul 2 00:48:45.303734 kubelet[2034]: I0702 00:48:45.303155 2034 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0427f0352968d43b8b5a35dfc8896680-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0427f0352968d43b8b5a35dfc8896680\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:48:45.303734 kubelet[2034]: I0702 00:48:45.303176 2034 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:48:45.304475 kubelet[2034]: I0702 00:48:45.303195 2034 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:48:45.304475 kubelet[2034]: I0702 00:48:45.303256 2034 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0427f0352968d43b8b5a35dfc8896680-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0427f0352968d43b8b5a35dfc8896680\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:48:45.304475 kubelet[2034]: I0702 00:48:45.303299 2034 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0427f0352968d43b8b5a35dfc8896680-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0427f0352968d43b8b5a35dfc8896680\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:48:45.304475 kubelet[2034]: I0702 00:48:45.303324 2034 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:48:45.305309 kubelet[2034]: E0702 00:48:45.305288 2034 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: 
connect: connection refused" interval="400ms" Jul 2 00:48:45.406426 kubelet[2034]: I0702 00:48:45.406389 2034 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:48:45.406744 kubelet[2034]: E0702 00:48:45.406682 2034 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.149:6443/api/v1/nodes\": dial tcp 10.0.0.149:6443: connect: connection refused" node="localhost" Jul 2 00:48:45.536263 kubelet[2034]: E0702 00:48:45.536219 2034 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:48:45.536480 kubelet[2034]: E0702 00:48:45.536450 2034 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:48:45.536556 kubelet[2034]: E0702 00:48:45.536530 2034 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:48:45.537118 containerd[1351]: time="2024-07-02T00:48:45.537063494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0427f0352968d43b8b5a35dfc8896680,Namespace:kube-system,Attempt:0,}" Jul 2 00:48:45.537366 containerd[1351]: time="2024-07-02T00:48:45.537143588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,}" Jul 2 00:48:45.537366 containerd[1351]: time="2024-07-02T00:48:45.537100359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,}" Jul 2 00:48:45.705910 kubelet[2034]: E0702 00:48:45.705808 2034 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="800ms" Jul 2 00:48:45.808315 kubelet[2034]: I0702 00:48:45.808289 2034 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:48:45.808648 kubelet[2034]: E0702 00:48:45.808610 2034 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.149:6443/api/v1/nodes\": dial tcp 10.0.0.149:6443: connect: connection refused" node="localhost" Jul 2 00:48:45.950569 kubelet[2034]: W0702 00:48:45.950514 2034 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jul 2 00:48:45.950569 kubelet[2034]: E0702 00:48:45.950562 2034 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jul 2 00:48:45.974128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount457250480.mount: Deactivated successfully. 
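Note on the audit records above: the PROCTITLE field stores the command line that triggered each netfilter change as hex-encoded, NUL-separated argv. A minimal Python sketch (illustrative, not part of the captured journal) that decodes the first PROCTITLE value logged above:

    # Decode an audit PROCTITLE field: hex-encoded argv with NUL separators.
    proctitle_hex = (
        "69707461626C6573002D770035002D5700313030303030"
        "002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65"
    )
    argv = bytes.fromhex(proctitle_hex).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # prints: iptables -w 5 -W 100000 -N KUBE-IPTABLES-HINT -t mangle

Decoded this way, the records correspond to the kubelet creating its KUBE-IPTABLES-HINT, KUBE-FIREWALL and KUBE-KUBELET-CANARY chains through /usr/sbin/xtables-nft-multi, matching the "Initialized iptables rules." messages.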
Jul 2 00:48:45.978777 containerd[1351]: time="2024-07-02T00:48:45.978733418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:48:45.980363 containerd[1351]: time="2024-07-02T00:48:45.980329181Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 2 00:48:45.982890 containerd[1351]: time="2024-07-02T00:48:45.982854176Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:48:45.984150 containerd[1351]: time="2024-07-02T00:48:45.984119955Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:48:45.984703 containerd[1351]: time="2024-07-02T00:48:45.984661603Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:48:45.985790 containerd[1351]: time="2024-07-02T00:48:45.985757187Z" level=info msg="ImageUpdate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:48:45.986015 kubelet[2034]: E0702 00:48:45.985916 2034 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17de3ef3ee379c24", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 48, 45, 96705060, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 48, 45, 96705060, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.149:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.149:6443: connect: connection refused'(may retry after sleeping) Jul 2 00:48:45.987900 containerd[1351]: time="2024-07-02T00:48:45.987866499Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:48:45.988138 containerd[1351]: time="2024-07-02T00:48:45.988049463Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:48:45.989283 containerd[1351]: time="2024-07-02T00:48:45.989234228Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:48:45.991648 containerd[1351]: time="2024-07-02T00:48:45.991589307Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:48:45.995097 containerd[1351]: time="2024-07-02T00:48:45.995053859Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 457.787507ms" Jul 2 00:48:45.995477 containerd[1351]: time="2024-07-02T00:48:45.995448887Z" level=info msg="ImageUpdate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:48:45.996509 containerd[1351]: time="2024-07-02T00:48:45.996470421Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 459.286766ms" Jul 2 00:48:45.998333 containerd[1351]: time="2024-07-02T00:48:45.998270043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:48:45.999116 containerd[1351]: time="2024-07-02T00:48:45.999077511Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:48:45.999982 containerd[1351]: time="2024-07-02T00:48:45.999953266Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:48:46.002100 containerd[1351]: time="2024-07-02T00:48:46.001027115Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:48:46.002478 containerd[1351]: time="2024-07-02T00:48:46.002445656Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 465.222095ms" Jul 2 00:48:46.132048 kubelet[2034]: W0702 00:48:46.131981 2034 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jul 2 00:48:46.132048 kubelet[2034]: E0702 00:48:46.132057 2034 reflector.go:147] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jul 2 00:48:46.204550 containerd[1351]: time="2024-07-02T00:48:46.204468649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:48:46.204693 containerd[1351]: time="2024-07-02T00:48:46.204541253Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:48:46.204693 containerd[1351]: time="2024-07-02T00:48:46.204599807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:48:46.204693 containerd[1351]: time="2024-07-02T00:48:46.204617418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:48:46.204693 containerd[1351]: time="2024-07-02T00:48:46.204630626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:48:46.204849 containerd[1351]: time="2024-07-02T00:48:46.204508833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:48:46.204849 containerd[1351]: time="2024-07-02T00:48:46.204556902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:48:46.204849 containerd[1351]: time="2024-07-02T00:48:46.204575033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:48:46.204849 containerd[1351]: time="2024-07-02T00:48:46.204588040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:48:46.205013 containerd[1351]: time="2024-07-02T00:48:46.204966545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:48:46.205013 containerd[1351]: time="2024-07-02T00:48:46.204994042Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:48:46.205865 containerd[1351]: time="2024-07-02T00:48:46.205088618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:48:46.252337 containerd[1351]: time="2024-07-02T00:48:46.250558474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0427f0352968d43b8b5a35dfc8896680,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7211e292a6b3dfe6b9b9c887c4a01016b52dbdac5c9259fcfc7b2b6094337b7\"" Jul 2 00:48:46.252431 kubelet[2034]: E0702 00:48:46.251602 2034 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:48:46.253200 containerd[1351]: time="2024-07-02T00:48:46.253167104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"e029174806e72fd15e63cc9b30e3addc5e9406095417a39fe856f00a8b5d7147\"" Jul 2 00:48:46.253908 kubelet[2034]: E0702 00:48:46.253884 2034 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:48:46.255639 containerd[1351]: time="2024-07-02T00:48:46.255351642Z" level=info msg="CreateContainer within sandbox \"d7211e292a6b3dfe6b9b9c887c4a01016b52dbdac5c9259fcfc7b2b6094337b7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 00:48:46.255911 containerd[1351]: time="2024-07-02T00:48:46.255878115Z" level=info msg="CreateContainer within sandbox \"e029174806e72fd15e63cc9b30e3addc5e9406095417a39fe856f00a8b5d7147\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 00:48:46.256823 containerd[1351]: time="2024-07-02T00:48:46.256783493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"f97b33cdc2816f54ea584c0209180a3f4c354c71f6f6a094d11a1f2a926678ab\"" Jul 2 00:48:46.257444 kubelet[2034]: E0702 00:48:46.257324 2034 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:48:46.259158 containerd[1351]: time="2024-07-02T00:48:46.259127085Z" level=info msg="CreateContainer within sandbox \"f97b33cdc2816f54ea584c0209180a3f4c354c71f6f6a094d11a1f2a926678ab\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 00:48:46.271664 containerd[1351]: time="2024-07-02T00:48:46.271623350Z" level=info msg="CreateContainer within sandbox \"d7211e292a6b3dfe6b9b9c887c4a01016b52dbdac5c9259fcfc7b2b6094337b7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0327e423323af46d3edc39c1594a730899fa4b2fa7b1d282cba0a1fad9f2ff0b\"" Jul 2 00:48:46.272261 containerd[1351]: time="2024-07-02T00:48:46.272208137Z" level=info msg="StartContainer for \"0327e423323af46d3edc39c1594a730899fa4b2fa7b1d282cba0a1fad9f2ff0b\"" Jul 2 00:48:46.274320 containerd[1351]: time="2024-07-02T00:48:46.274283170Z" level=info msg="CreateContainer within sandbox \"f97b33cdc2816f54ea584c0209180a3f4c354c71f6f6a094d11a1f2a926678ab\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"beb4fba4cadee3085e3dc6383250c2e27c9cb155cc71899985284a8441dd294a\"" Jul 2 00:48:46.274727 containerd[1351]: time="2024-07-02T00:48:46.274698017Z" level=info msg="StartContainer for 
\"beb4fba4cadee3085e3dc6383250c2e27c9cb155cc71899985284a8441dd294a\"" Jul 2 00:48:46.275293 containerd[1351]: time="2024-07-02T00:48:46.275237897Z" level=info msg="CreateContainer within sandbox \"e029174806e72fd15e63cc9b30e3addc5e9406095417a39fe856f00a8b5d7147\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2e467ea1b1ccd1fc47f5bfa3778fc183a22c59f7930bd2fcc836a9d27e9d70bb\"" Jul 2 00:48:46.275759 containerd[1351]: time="2024-07-02T00:48:46.275737074Z" level=info msg="StartContainer for \"2e467ea1b1ccd1fc47f5bfa3778fc183a22c59f7930bd2fcc836a9d27e9d70bb\"" Jul 2 00:48:46.329235 containerd[1351]: time="2024-07-02T00:48:46.329193435Z" level=info msg="StartContainer for \"0327e423323af46d3edc39c1594a730899fa4b2fa7b1d282cba0a1fad9f2ff0b\" returns successfully" Jul 2 00:48:46.329516 containerd[1351]: time="2024-07-02T00:48:46.329411565Z" level=info msg="StartContainer for \"2e467ea1b1ccd1fc47f5bfa3778fc183a22c59f7930bd2fcc836a9d27e9d70bb\" returns successfully" Jul 2 00:48:46.334105 containerd[1351]: time="2024-07-02T00:48:46.334068332Z" level=info msg="StartContainer for \"beb4fba4cadee3085e3dc6383250c2e27c9cb155cc71899985284a8441dd294a\" returns successfully" Jul 2 00:48:46.506668 kubelet[2034]: E0702 00:48:46.506524 2034 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="1.6s" Jul 2 00:48:46.610795 kubelet[2034]: I0702 00:48:46.610765 2034 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:48:47.137034 kubelet[2034]: E0702 00:48:47.137004 2034 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:48:47.139481 kubelet[2034]: E0702 00:48:47.139460 2034 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:48:47.141625 kubelet[2034]: E0702 00:48:47.141597 2034 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:48:48.143482 kubelet[2034]: E0702 00:48:48.143451 2034 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:48:48.677826 kubelet[2034]: E0702 00:48:48.677793 2034 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 2 00:48:48.737970 kubelet[2034]: I0702 00:48:48.737940 2034 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jul 2 00:48:49.092950 kubelet[2034]: I0702 00:48:49.092845 2034 apiserver.go:52] "Watching apiserver" Jul 2 00:48:49.102947 kubelet[2034]: I0702 00:48:49.102904 2034 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 00:48:51.460038 systemd[1]: Reloading. Jul 2 00:48:51.638795 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 2 00:48:51.717472 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:48:51.735374 kernel: kauditd_printk_skb: 26 callbacks suppressed Jul 2 00:48:51.735503 kernel: audit: type=1131 audit(1719881331.731:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:51.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:51.732763 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 00:48:51.733113 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:48:51.743928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:48:51.850584 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:48:51.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:51.854344 kernel: audit: type=1130 audit(1719881331.850:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:51.913416 kubelet[2384]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:48:51.913416 kubelet[2384]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:48:51.913416 kubelet[2384]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:48:51.913885 kubelet[2384]: I0702 00:48:51.913470 2384 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:48:51.917638 kubelet[2384]: I0702 00:48:51.917598 2384 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 00:48:51.917638 kubelet[2384]: I0702 00:48:51.917626 2384 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:48:51.918007 kubelet[2384]: I0702 00:48:51.917977 2384 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 00:48:51.919795 kubelet[2384]: I0702 00:48:51.919494 2384 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 00:48:51.920676 kubelet[2384]: I0702 00:48:51.920418 2384 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:48:51.927721 kubelet[2384]: W0702 00:48:51.927679 2384 machine.go:65] Cannot read vendor id correctly, set empty. Jul 2 00:48:51.928498 kubelet[2384]: I0702 00:48:51.928469 2384 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 00:48:51.930288 kubelet[2384]: I0702 00:48:51.928835 2384 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:48:51.930288 kubelet[2384]: I0702 00:48:51.929001 2384 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:48:51.930288 kubelet[2384]: I0702 00:48:51.929025 2384 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:48:51.930288 kubelet[2384]: I0702 00:48:51.929034 2384 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:48:51.930288 kubelet[2384]: I0702 00:48:51.929091 2384 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:48:51.930288 kubelet[2384]: I0702 00:48:51.929168 2384 kubelet.go:393] "Attempting to sync node with API server" Jul 2 00:48:51.930589 kubelet[2384]: I0702 00:48:51.929182 2384 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:48:51.930589 kubelet[2384]: I0702 00:48:51.929205 2384 kubelet.go:309] "Adding apiserver pod source" Jul 2 00:48:51.930589 kubelet[2384]: I0702 00:48:51.929214 2384 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:48:51.931568 kubelet[2384]: I0702 00:48:51.931515 2384 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jul 2 00:48:51.934722 kubelet[2384]: I0702 00:48:51.934701 2384 server.go:1232] "Started kubelet" Jul 2 00:48:51.935620 kubelet[2384]: I0702 00:48:51.935586 2384 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 00:48:51.935897 kubelet[2384]: I0702 00:48:51.935872 2384 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:48:51.935945 kubelet[2384]: I0702 00:48:51.935923 2384 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:48:51.936030 kubelet[2384]: I0702 00:48:51.936014 2384 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:48:51.936393 kubelet[2384]: E0702 00:48:51.936369 2384 cri_stats_provider.go:448] "Failed to 
get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 00:48:51.936393 kubelet[2384]: E0702 00:48:51.936394 2384 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:48:51.936728 kubelet[2384]: I0702 00:48:51.936701 2384 server.go:462] "Adding debug handlers to kubelet server" Jul 2 00:48:51.939959 kubelet[2384]: E0702 00:48:51.939938 2384 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:48:51.939959 kubelet[2384]: I0702 00:48:51.939965 2384 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:48:51.940110 kubelet[2384]: I0702 00:48:51.940094 2384 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 00:48:51.940286 kubelet[2384]: I0702 00:48:51.940228 2384 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 00:48:51.972993 kubelet[2384]: I0702 00:48:51.972908 2384 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:48:51.976219 kubelet[2384]: I0702 00:48:51.976081 2384 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 00:48:51.976219 kubelet[2384]: I0702 00:48:51.976105 2384 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:48:51.976219 kubelet[2384]: I0702 00:48:51.976122 2384 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 00:48:51.976219 kubelet[2384]: E0702 00:48:51.976178 2384 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:48:52.019070 kubelet[2384]: I0702 00:48:52.019042 2384 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:48:52.019070 kubelet[2384]: I0702 00:48:52.019067 2384 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:48:52.019230 kubelet[2384]: I0702 00:48:52.019084 2384 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:48:52.019292 kubelet[2384]: I0702 00:48:52.019237 2384 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 00:48:52.019292 kubelet[2384]: I0702 00:48:52.019285 2384 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 00:48:52.019354 kubelet[2384]: I0702 00:48:52.019294 2384 policy_none.go:49] "None policy: Start" Jul 2 00:48:52.020054 kubelet[2384]: I0702 00:48:52.020035 2384 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 00:48:52.020110 kubelet[2384]: I0702 00:48:52.020065 2384 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:48:52.020299 kubelet[2384]: I0702 00:48:52.020237 2384 state_mem.go:75] "Updated machine memory state" Jul 2 00:48:52.021508 kubelet[2384]: I0702 00:48:52.021371 2384 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:48:52.022141 kubelet[2384]: I0702 00:48:52.022113 2384 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:48:52.044072 kubelet[2384]: I0702 00:48:52.044026 2384 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:48:52.052671 kubelet[2384]: I0702 00:48:52.052631 2384 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Jul 2 00:48:52.052774 kubelet[2384]: I0702 00:48:52.052727 2384 
kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jul 2 00:48:52.077079 kubelet[2384]: I0702 00:48:52.077033 2384 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 00:48:52.077225 kubelet[2384]: I0702 00:48:52.077145 2384 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 00:48:52.077225 kubelet[2384]: I0702 00:48:52.077179 2384 topology_manager.go:215] "Topology Admit Handler" podUID="0427f0352968d43b8b5a35dfc8896680" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 00:48:52.141705 kubelet[2384]: I0702 00:48:52.141675 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:48:52.141901 kubelet[2384]: I0702 00:48:52.141888 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:48:52.141991 kubelet[2384]: I0702 00:48:52.141980 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:48:52.142077 kubelet[2384]: I0702 00:48:52.142066 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0427f0352968d43b8b5a35dfc8896680-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0427f0352968d43b8b5a35dfc8896680\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:48:52.142158 kubelet[2384]: I0702 00:48:52.142149 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0427f0352968d43b8b5a35dfc8896680-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0427f0352968d43b8b5a35dfc8896680\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:48:52.142237 kubelet[2384]: I0702 00:48:52.142228 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:48:52.142366 kubelet[2384]: I0702 00:48:52.142354 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jul 2 00:48:52.142443 kubelet[2384]: 
I0702 00:48:52.142433 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0427f0352968d43b8b5a35dfc8896680-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0427f0352968d43b8b5a35dfc8896680\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:48:52.142533 kubelet[2384]: I0702 00:48:52.142522 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:48:52.384591 kubelet[2384]: E0702 00:48:52.384470 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:48:52.385500 kubelet[2384]: E0702 00:48:52.385471 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:48:52.385589 kubelet[2384]: E0702 00:48:52.385574 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:48:52.930337 kubelet[2384]: I0702 00:48:52.930302 2384 apiserver.go:52] "Watching apiserver" Jul 2 00:48:52.940539 kubelet[2384]: I0702 00:48:52.940489 2384 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 00:48:52.986736 kubelet[2384]: E0702 00:48:52.986705 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:48:52.986932 kubelet[2384]: E0702 00:48:52.986793 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:48:52.987228 kubelet[2384]: E0702 00:48:52.987211 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:48:53.015580 kubelet[2384]: I0702 00:48:53.015547 2384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.015489834 podCreationTimestamp="2024-07-02 00:48:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:48:53.015442899 +0000 UTC m=+1.160443189" watchObservedRunningTime="2024-07-02 00:48:53.015489834 +0000 UTC m=+1.160490004" Jul 2 00:48:53.015727 kubelet[2384]: I0702 00:48:53.015647 2384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.015632001 podCreationTimestamp="2024-07-02 00:48:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:48:53.008340761 +0000 UTC m=+1.153340931" watchObservedRunningTime="2024-07-02 00:48:53.015632001 +0000 UTC m=+1.160632171" Jul 2 00:48:53.032407 kubelet[2384]: I0702 
00:48:53.032369 2384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.032331579 podCreationTimestamp="2024-07-02 00:48:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:48:53.02304104 +0000 UTC m=+1.168041210" watchObservedRunningTime="2024-07-02 00:48:53.032331579 +0000 UTC m=+1.177331709" Jul 2 00:48:53.988237 kubelet[2384]: E0702 00:48:53.988211 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:48:53.988829 kubelet[2384]: E0702 00:48:53.988810 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:48:56.565902 kubelet[2384]: E0702 00:48:56.565869 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:48:56.996822 kubelet[2384]: E0702 00:48:56.996720 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:48:57.076953 sudo[1533]: pam_unix(sudo:session): session closed for user root Jul 2 00:48:57.076000 audit[1533]: USER_END pid=1533 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 00:48:57.076000 audit[1533]: CRED_DISP pid=1533 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 00:48:57.081259 kernel: audit: type=1106 audit(1719881337.076:219): pid=1533 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 00:48:57.081312 kernel: audit: type=1104 audit(1719881337.076:220): pid=1533 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 00:48:57.082684 sshd[1526]: pam_unix(sshd:session): session closed for user core Jul 2 00:48:57.083000 audit[1526]: USER_END pid=1526 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:48:57.085437 systemd[1]: sshd@6-10.0.0.149:22-10.0.0.1:51652.service: Deactivated successfully. Jul 2 00:48:57.086216 systemd[1]: session-7.scope: Deactivated successfully. 
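The recurring dns.go:153 "Nameserver limits exceeded" warnings mean the host resolv.conf lists more nameservers than the kubelet will pass through (the Linux resolver honours at most three), so only 1.1.1.1, 1.0.0.1 and 8.8.8.8 are applied. A rough Python sketch (illustrative only; the path and the check are assumptions based on the message) for inspecting this on the host:

    from pathlib import Path

    # Collect nameserver entries from the resolver configuration.
    lines = Path("/etc/resolv.conf").read_text().splitlines()
    servers = [
        parts[1]
        for parts in (line.split() for line in lines)
        if len(parts) > 1 and parts[0] == "nameserver"
    ]
    print(f"{len(servers)} nameservers configured: {servers}")
    if len(servers) > 3:
        print("more than 3 entries; the extras are the ones being omitted")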
Jul 2 00:48:57.083000 audit[1526]: CRED_DISP pid=1526 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:48:57.089259 kernel: audit: type=1106 audit(1719881337.083:221): pid=1526 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:48:57.089303 kernel: audit: type=1104 audit(1719881337.083:222): pid=1526 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:48:57.089332 kernel: audit: type=1131 audit(1719881337.085:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.149:22-10.0.0.1:51652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:57.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.149:22-10.0.0.1:51652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:48:57.091374 systemd-logind[1334]: Session 7 logged out. Waiting for processes to exit. Jul 2 00:48:57.092129 systemd-logind[1334]: Removed session 7. Jul 2 00:48:58.067070 kubelet[2384]: E0702 00:48:58.066973 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:48:59.000346 kubelet[2384]: E0702 00:48:59.000319 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:49:00.003043 kubelet[2384]: E0702 00:49:00.003010 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:49:02.103934 update_engine[1336]: I0702 00:49:02.103868 1336 update_attempter.cc:509] Updating boot flags... Jul 2 00:49:02.121269 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2477) Jul 2 00:49:02.148291 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2479) Jul 2 00:49:02.171289 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2479) Jul 2 00:49:03.133799 kubelet[2384]: E0702 00:49:03.133763 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:49:07.015348 kubelet[2384]: I0702 00:49:07.015315 2384 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 00:49:07.015744 containerd[1351]: time="2024-07-02T00:49:07.015661191Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
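Each audit record id embeds the event time as epoch seconds, e.g. audit(1719881337.083:221) for the CRED_DISP record above, while the surrounding journal lines show the same instant as wall-clock time. A tiny Python sketch (illustrative; the timestamp is copied from that record) converting between the two:

    from datetime import datetime, timezone

    stamp = "1719881337.083"  # epoch-seconds part of audit(1719881337.083:221)
    print(datetime.fromtimestamp(float(stamp), tz=timezone.utc).isoformat())
    # prints: 2024-07-02T00:48:57.083000+00:00, matching "Jul 2 00:48:57.083000"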
Jul 2 00:49:07.015930 kubelet[2384]: I0702 00:49:07.015830 2384 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 00:49:07.087167 kubelet[2384]: I0702 00:49:07.087109 2384 topology_manager.go:215] "Topology Admit Handler" podUID="aeb1758e-2d69-4404-b7c4-a4ac698b38a8" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-zpp7r" Jul 2 00:49:07.133089 kubelet[2384]: I0702 00:49:07.133051 2384 topology_manager.go:215] "Topology Admit Handler" podUID="51a1a8b7-ad51-4007-b8da-18b53e04a46e" podNamespace="kube-system" podName="kube-proxy-xlrsh" Jul 2 00:49:07.243695 kubelet[2384]: I0702 00:49:07.243645 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qntcs\" (UniqueName: \"kubernetes.io/projected/aeb1758e-2d69-4404-b7c4-a4ac698b38a8-kube-api-access-qntcs\") pod \"tigera-operator-76c4974c85-zpp7r\" (UID: \"aeb1758e-2d69-4404-b7c4-a4ac698b38a8\") " pod="tigera-operator/tigera-operator-76c4974c85-zpp7r" Jul 2 00:49:07.243695 kubelet[2384]: I0702 00:49:07.243700 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51a1a8b7-ad51-4007-b8da-18b53e04a46e-xtables-lock\") pod \"kube-proxy-xlrsh\" (UID: \"51a1a8b7-ad51-4007-b8da-18b53e04a46e\") " pod="kube-system/kube-proxy-xlrsh" Jul 2 00:49:07.243901 kubelet[2384]: I0702 00:49:07.243724 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/aeb1758e-2d69-4404-b7c4-a4ac698b38a8-var-lib-calico\") pod \"tigera-operator-76c4974c85-zpp7r\" (UID: \"aeb1758e-2d69-4404-b7c4-a4ac698b38a8\") " pod="tigera-operator/tigera-operator-76c4974c85-zpp7r" Jul 2 00:49:07.243901 kubelet[2384]: I0702 00:49:07.243744 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/51a1a8b7-ad51-4007-b8da-18b53e04a46e-kube-proxy\") pod \"kube-proxy-xlrsh\" (UID: \"51a1a8b7-ad51-4007-b8da-18b53e04a46e\") " pod="kube-system/kube-proxy-xlrsh" Jul 2 00:49:07.243901 kubelet[2384]: I0702 00:49:07.243774 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51a1a8b7-ad51-4007-b8da-18b53e04a46e-lib-modules\") pod \"kube-proxy-xlrsh\" (UID: \"51a1a8b7-ad51-4007-b8da-18b53e04a46e\") " pod="kube-system/kube-proxy-xlrsh" Jul 2 00:49:07.243901 kubelet[2384]: I0702 00:49:07.243794 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4hfh\" (UniqueName: \"kubernetes.io/projected/51a1a8b7-ad51-4007-b8da-18b53e04a46e-kube-api-access-x4hfh\") pod \"kube-proxy-xlrsh\" (UID: \"51a1a8b7-ad51-4007-b8da-18b53e04a46e\") " pod="kube-system/kube-proxy-xlrsh" Jul 2 00:49:07.390287 containerd[1351]: time="2024-07-02T00:49:07.389984701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-zpp7r,Uid:aeb1758e-2d69-4404-b7c4-a4ac698b38a8,Namespace:tigera-operator,Attempt:0,}" Jul 2 00:49:07.409657 containerd[1351]: time="2024-07-02T00:49:07.409585421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:49:07.409657 containerd[1351]: time="2024-07-02T00:49:07.409628988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:49:07.409657 containerd[1351]: time="2024-07-02T00:49:07.409644591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:49:07.409657 containerd[1351]: time="2024-07-02T00:49:07.409653952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:49:07.436279 kubelet[2384]: E0702 00:49:07.436237 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:49:07.436631 containerd[1351]: time="2024-07-02T00:49:07.436584039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xlrsh,Uid:51a1a8b7-ad51-4007-b8da-18b53e04a46e,Namespace:kube-system,Attempt:0,}" Jul 2 00:49:07.455516 containerd[1351]: time="2024-07-02T00:49:07.455422238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:49:07.455516 containerd[1351]: time="2024-07-02T00:49:07.455483168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:49:07.455516 containerd[1351]: time="2024-07-02T00:49:07.455497650Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:49:07.455516 containerd[1351]: time="2024-07-02T00:49:07.455512773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:49:07.456202 containerd[1351]: time="2024-07-02T00:49:07.456164076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-zpp7r,Uid:aeb1758e-2d69-4404-b7c4-a4ac698b38a8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"68698f949b2f199b43c1e03154e49711d3e9dd38343c16ecb1dba00c8320c2d2\"" Jul 2 00:49:07.458900 containerd[1351]: time="2024-07-02T00:49:07.458863986Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jul 2 00:49:07.485136 containerd[1351]: time="2024-07-02T00:49:07.485098722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xlrsh,Uid:51a1a8b7-ad51-4007-b8da-18b53e04a46e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e018ad775d77466250e3a305d2e372e694cd1d68d706cf1a7d54e7b28d1300ec\"" Jul 2 00:49:07.486265 kubelet[2384]: E0702 00:49:07.485865 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:49:07.489077 containerd[1351]: time="2024-07-02T00:49:07.489009705Z" level=info msg="CreateContainer within sandbox \"e018ad775d77466250e3a305d2e372e694cd1d68d706cf1a7d54e7b28d1300ec\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 00:49:07.502944 containerd[1351]: time="2024-07-02T00:49:07.502891355Z" level=info msg="CreateContainer within sandbox \"e018ad775d77466250e3a305d2e372e694cd1d68d706cf1a7d54e7b28d1300ec\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"38316f0c680a09baeaf1ac5cebfc690b691438c94c9420b868d859ed502ddae0\"" Jul 2 00:49:07.503642 containerd[1351]: time="2024-07-02T00:49:07.503607189Z" level=info msg="StartContainer for \"38316f0c680a09baeaf1ac5cebfc690b691438c94c9420b868d859ed502ddae0\"" Jul 2 00:49:07.568363 containerd[1351]: time="2024-07-02T00:49:07.568292126Z" level=info msg="StartContainer for \"38316f0c680a09baeaf1ac5cebfc690b691438c94c9420b868d859ed502ddae0\" returns successfully" Jul 2 00:49:07.702000 audit[2627]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2627 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 00:49:07.702000 audit[2627]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffd2fdaa0 a2=0 a3=1 items=0 ppid=2587 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.708341 kernel: audit: type=1325 audit(1719881347.702:224): table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2627 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 00:49:07.708410 kernel: audit: type=1300 audit(1719881347.702:224): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffd2fdaa0 a2=0 a3=1 items=0 ppid=2587 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.708442 kernel: audit: type=1327 audit(1719881347.702:224): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 2 00:49:07.702000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 2 00:49:07.704000 audit[2628]: NETFILTER_CFG 
table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2628 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:49:07.711128 kernel: audit: type=1325 audit(1719881347.704:225): table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2628 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:49:07.704000 audit[2628]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdedf8a30 a2=0 a3=1 items=0 ppid=2587 pid=2628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.715279 kernel: audit: type=1300 audit(1719881347.704:225): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdedf8a30 a2=0 a3=1 items=0 ppid=2587 pid=2628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.704000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 2 00:49:07.717679 kernel: audit: type=1327 audit(1719881347.704:225): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 2 00:49:07.717722 kernel: audit: type=1325 audit(1719881347.705:226): table=nat:40 family=2 entries=1 op=nft_register_chain pid=2629 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:49:07.705000 audit[2629]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2629 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:49:07.705000 audit[2629]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc3496bf0 a2=0 a3=1 items=0 ppid=2587 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.721722 kernel: audit: type=1300 audit(1719881347.705:226): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc3496bf0 a2=0 a3=1 items=0 ppid=2587 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.721767 kernel: audit: type=1327 audit(1719881347.705:226): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 2 00:49:07.705000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 2 00:49:07.723012 kernel: audit: type=1325 audit(1719881347.706:227): table=filter:41 family=2 entries=1 op=nft_register_chain pid=2630 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:49:07.706000 audit[2630]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2630 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:49:07.706000 audit[2630]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffeff879a0 a2=0 a3=1 items=0 ppid=2587 pid=2630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.706000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 2 00:49:07.712000 audit[2631]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2631 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 00:49:07.712000 audit[2631]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff4e2d6f0 a2=0 a3=1 items=0 ppid=2587 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.712000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 2 00:49:07.716000 audit[2632]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2632 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 00:49:07.716000 audit[2632]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffce57a220 a2=0 a3=1 items=0 ppid=2587 pid=2632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.716000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 2 00:49:07.804000 audit[2633]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2633 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:49:07.804000 audit[2633]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffde742890 a2=0 a3=1 items=0 ppid=2587 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.804000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 2 00:49:07.809000 audit[2635]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2635 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:49:07.809000 audit[2635]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffeccbd620 a2=0 a3=1 items=0 ppid=2587 pid=2635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.809000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jul 2 00:49:07.816000 audit[2638]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2638 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:49:07.816000 audit[2638]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffffba79620 a2=0 a3=1 items=0 ppid=2587 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.816000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jul 2 00:49:07.818000 audit[2639]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2639 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:49:07.818000 audit[2639]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdcb55570 a2=0 a3=1 items=0 ppid=2587 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.818000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 2 00:49:07.821000 audit[2641]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2641 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:49:07.821000 audit[2641]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff6fea290 a2=0 a3=1 items=0 ppid=2587 pid=2641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.821000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 2 00:49:07.822000 audit[2642]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2642 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:49:07.822000 audit[2642]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff6d86a60 a2=0 a3=1 items=0 ppid=2587 pid=2642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.822000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 2 00:49:07.825000 audit[2644]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2644 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:49:07.825000 audit[2644]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff81b8930 a2=0 a3=1 items=0 ppid=2587 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.825000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 2 00:49:07.829000 audit[2647]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2647 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:49:07.829000 audit[2647]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff6624c60 a2=0 a3=1 items=0 ppid=2587 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.829000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jul 2 00:49:07.830000 audit[2648]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2648 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:49:07.830000 audit[2648]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe0a29ed0 a2=0 a3=1 items=0 ppid=2587 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.830000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 2 00:49:07.833000 audit[2650]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2650 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:49:07.833000 audit[2650]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff5b2cc90 a2=0 a3=1 items=0 ppid=2587 pid=2650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.833000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 2 00:49:07.835000 audit[2651]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2651 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:49:07.835000 audit[2651]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd49d98e0 a2=0 a3=1 items=0 ppid=2587 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.835000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 2 00:49:07.838000 audit[2653]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2653 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:49:07.838000 audit[2653]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffda589a00 a2=0 a3=1 items=0 ppid=2587 pid=2653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.838000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 2 00:49:07.842000 audit[2656]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2656 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 
00:49:07.842000 audit[2656]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe79680f0 a2=0 a3=1 items=0 ppid=2587 pid=2656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.842000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 2 00:49:07.846000 audit[2659]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2659 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:49:07.846000 audit[2659]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffce15e0e0 a2=0 a3=1 items=0 ppid=2587 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.846000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 2 00:49:07.847000 audit[2660]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2660 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:49:07.847000 audit[2660]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe1a16d60 a2=0 a3=1 items=0 ppid=2587 pid=2660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.847000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 2 00:49:07.850000 audit[2662]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2662 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:49:07.850000 audit[2662]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffd1bec950 a2=0 a3=1 items=0 ppid=2587 pid=2662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.850000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 2 00:49:07.854000 audit[2665]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2665 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:49:07.854000 audit[2665]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff3444620 a2=0 a3=1 items=0 ppid=2587 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.854000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 2 00:49:07.855000 audit[2666]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2666 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:49:07.855000 audit[2666]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe814c4a0 a2=0 a3=1 items=0 ppid=2587 pid=2666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.855000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 2 00:49:07.858000 audit[2668]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2668 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 00:49:07.858000 audit[2668]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffc20a9a30 a2=0 a3=1 items=0 ppid=2587 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.858000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 2 00:49:07.877000 audit[2674]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2674 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 00:49:07.877000 audit[2674]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=ffffdf2186f0 a2=0 a3=1 items=0 ppid=2587 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.877000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 00:49:07.881000 audit[2674]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2674 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 00:49:07.881000 audit[2674]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffdf2186f0 a2=0 a3=1 items=0 ppid=2587 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.881000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 00:49:07.891000 audit[2681]: NETFILTER_CFG table=filter:65 family=2 entries=14 op=nft_register_rule pid=2681 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 00:49:07.891000 audit[2681]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=fffff9f152d0 a2=0 a3=1 items=0 ppid=2587 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 
00:49:07.891000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 00:49:07.900000 audit[2681]: NETFILTER_CFG table=nat:66 family=2 entries=12 op=nft_register_rule pid=2681 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 00:49:07.900000 audit[2681]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff9f152d0 a2=0 a3=1 items=0 ppid=2587 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.900000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 00:49:07.912000 audit[2682]: NETFILTER_CFG table=filter:67 family=10 entries=1 op=nft_register_chain pid=2682 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 00:49:07.912000 audit[2682]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe83d5820 a2=0 a3=1 items=0 ppid=2587 pid=2682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.912000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 2 00:49:07.915000 audit[2684]: NETFILTER_CFG table=filter:68 family=10 entries=2 op=nft_register_chain pid=2684 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 00:49:07.915000 audit[2684]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd3824430 a2=0 a3=1 items=0 ppid=2587 pid=2684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.915000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jul 2 00:49:07.919000 audit[2687]: NETFILTER_CFG table=filter:69 family=10 entries=2 op=nft_register_chain pid=2687 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 00:49:07.919000 audit[2687]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffeb6da0c0 a2=0 a3=1 items=0 ppid=2587 pid=2687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.919000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jul 2 00:49:07.920000 audit[2688]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2688 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 00:49:07.920000 audit[2688]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd59a5770 a2=0 a3=1 items=0 ppid=2587 pid=2688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.920000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 2 00:49:07.923000 audit[2690]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2690 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 00:49:07.923000 audit[2690]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd1d2d7f0 a2=0 a3=1 items=0 ppid=2587 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.923000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 2 00:49:07.924000 audit[2691]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_chain pid=2691 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 00:49:07.924000 audit[2691]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd27b7c30 a2=0 a3=1 items=0 ppid=2587 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.924000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 2 00:49:07.927000 audit[2693]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=2693 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 00:49:07.927000 audit[2693]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd85df830 a2=0 a3=1 items=0 ppid=2587 pid=2693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.927000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jul 2 00:49:07.930000 audit[2696]: NETFILTER_CFG table=filter:74 family=10 entries=2 op=nft_register_chain pid=2696 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 00:49:07.930000 audit[2696]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffcabbb7d0 a2=0 a3=1 items=0 ppid=2587 pid=2696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.930000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 2 00:49:07.931000 audit[2697]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2697 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 00:49:07.931000 audit[2697]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe3f3c350 a2=0 a3=1 items=0 ppid=2587 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.931000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 2 00:49:07.935000 audit[2699]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2699 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 00:49:07.935000 audit[2699]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe9d7b100 a2=0 a3=1 items=0 ppid=2587 pid=2699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.935000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 2 00:49:07.936000 audit[2700]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_chain pid=2700 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 00:49:07.936000 audit[2700]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc6ffe9e0 a2=0 a3=1 items=0 ppid=2587 pid=2700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.936000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 2 00:49:07.939000 audit[2702]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2702 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 00:49:07.939000 audit[2702]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffff778c30 a2=0 a3=1 items=0 ppid=2587 pid=2702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.939000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 2 00:49:07.943000 audit[2705]: NETFILTER_CFG table=filter:79 family=10 entries=1 op=nft_register_rule pid=2705 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 00:49:07.943000 audit[2705]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff71d6e80 a2=0 a3=1 items=0 ppid=2587 pid=2705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.943000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 2 
00:49:07.946000 audit[2708]: NETFILTER_CFG table=filter:80 family=10 entries=1 op=nft_register_rule pid=2708 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 00:49:07.946000 audit[2708]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc3cfcc10 a2=0 a3=1 items=0 ppid=2587 pid=2708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.946000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jul 2 00:49:07.947000 audit[2709]: NETFILTER_CFG table=nat:81 family=10 entries=1 op=nft_register_chain pid=2709 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 00:49:07.947000 audit[2709]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc6a7a150 a2=0 a3=1 items=0 ppid=2587 pid=2709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.947000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 2 00:49:07.950000 audit[2711]: NETFILTER_CFG table=nat:82 family=10 entries=2 op=nft_register_chain pid=2711 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 00:49:07.950000 audit[2711]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffe4ff03b0 a2=0 a3=1 items=0 ppid=2587 pid=2711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.950000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 2 00:49:07.953000 audit[2714]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2714 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 00:49:07.953000 audit[2714]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffd4d260d0 a2=0 a3=1 items=0 ppid=2587 pid=2714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.953000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 2 00:49:07.954000 audit[2715]: NETFILTER_CFG table=nat:84 family=10 entries=1 op=nft_register_chain pid=2715 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 00:49:07.954000 audit[2715]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd25db950 a2=0 a3=1 items=0 ppid=2587 pid=2715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.954000 audit: 
PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 2 00:49:07.960000 audit[2717]: NETFILTER_CFG table=nat:85 family=10 entries=2 op=nft_register_chain pid=2717 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 00:49:07.960000 audit[2717]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffc2b7df40 a2=0 a3=1 items=0 ppid=2587 pid=2717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.960000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 2 00:49:07.961000 audit[2718]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=2718 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 00:49:07.961000 audit[2718]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe9762160 a2=0 a3=1 items=0 ppid=2587 pid=2718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.961000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 2 00:49:07.964000 audit[2720]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=2720 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 00:49:07.964000 audit[2720]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd63f5930 a2=0 a3=1 items=0 ppid=2587 pid=2720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.964000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 2 00:49:07.967000 audit[2723]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=2723 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 00:49:07.967000 audit[2723]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffff7e7440 a2=0 a3=1 items=0 ppid=2587 pid=2723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.967000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 2 00:49:07.970000 audit[2725]: NETFILTER_CFG table=filter:89 family=10 entries=3 op=nft_register_rule pid=2725 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 2 00:49:07.970000 audit[2725]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2004 a0=3 a1=fffff3a71840 a2=0 a3=1 items=0 ppid=2587 pid=2725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.970000 audit: PROCTITLE 
proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 00:49:07.971000 audit[2725]: NETFILTER_CFG table=nat:90 family=10 entries=7 op=nft_register_chain pid=2725 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 2 00:49:07.971000 audit[2725]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=fffff3a71840 a2=0 a3=1 items=0 ppid=2587 pid=2725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:07.971000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 00:49:08.013935 kubelet[2384]: E0702 00:49:08.013896 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:49:08.023839 kubelet[2384]: I0702 00:49:08.023799 2384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-xlrsh" podStartSLOduration=1.023764992 podCreationTimestamp="2024-07-02 00:49:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:49:08.022533725 +0000 UTC m=+16.167533895" watchObservedRunningTime="2024-07-02 00:49:08.023764992 +0000 UTC m=+16.168765162" Jul 2 00:49:08.625733 containerd[1351]: time="2024-07-02T00:49:08.625683216Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:49:08.626885 containerd[1351]: time="2024-07-02T00:49:08.626846713Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473634" Jul 2 00:49:08.627821 containerd[1351]: time="2024-07-02T00:49:08.627787096Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:49:08.629460 containerd[1351]: time="2024-07-02T00:49:08.629417264Z" level=info msg="ImageUpdate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:49:08.630972 containerd[1351]: time="2024-07-02T00:49:08.630935855Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:49:08.632233 containerd[1351]: time="2024-07-02T00:49:08.631861915Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 1.172955122s" Jul 2 00:49:08.632294 containerd[1351]: time="2024-07-02T00:49:08.632233692Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\"" Jul 2 00:49:08.639040 containerd[1351]: time="2024-07-02T00:49:08.639005081Z" level=info msg="CreateContainer within sandbox \"68698f949b2f199b43c1e03154e49711d3e9dd38343c16ecb1dba00c8320c2d2\" 
for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 2 00:49:08.672251 containerd[1351]: time="2024-07-02T00:49:08.672192084Z" level=info msg="CreateContainer within sandbox \"68698f949b2f199b43c1e03154e49711d3e9dd38343c16ecb1dba00c8320c2d2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"279e24a61b757458f5d598a915e767ecd3423150a5c24b96932a6ffcb0f6d294\"" Jul 2 00:49:08.673551 containerd[1351]: time="2024-07-02T00:49:08.673017689Z" level=info msg="StartContainer for \"279e24a61b757458f5d598a915e767ecd3423150a5c24b96932a6ffcb0f6d294\"" Jul 2 00:49:08.733842 containerd[1351]: time="2024-07-02T00:49:08.733795525Z" level=info msg="StartContainer for \"279e24a61b757458f5d598a915e767ecd3423150a5c24b96932a6ffcb0f6d294\" returns successfully" Jul 2 00:49:09.057147 kubelet[2384]: I0702 00:49:09.057030 2384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-zpp7r" podStartSLOduration=0.879162705 podCreationTimestamp="2024-07-02 00:49:07 +0000 UTC" firstStartedPulling="2024-07-02 00:49:07.457371108 +0000 UTC m=+15.602371278" lastFinishedPulling="2024-07-02 00:49:08.635197182 +0000 UTC m=+16.780197352" observedRunningTime="2024-07-02 00:49:09.056505389 +0000 UTC m=+17.201505559" watchObservedRunningTime="2024-07-02 00:49:09.056988779 +0000 UTC m=+17.201988909" Jul 2 00:49:09.355890 systemd[1]: run-containerd-runc-k8s.io-279e24a61b757458f5d598a915e767ecd3423150a5c24b96932a6ffcb0f6d294-runc.oow5VH.mount: Deactivated successfully. Jul 2 00:49:12.199000 audit[2774]: NETFILTER_CFG table=filter:91 family=2 entries=15 op=nft_register_rule pid=2774 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 00:49:12.199000 audit[2774]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffe420cfb0 a2=0 a3=1 items=0 ppid=2587 pid=2774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:12.199000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 00:49:12.201000 audit[2774]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2774 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 00:49:12.201000 audit[2774]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe420cfb0 a2=0 a3=1 items=0 ppid=2587 pid=2774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:12.201000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 00:49:12.211000 audit[2776]: NETFILTER_CFG table=filter:93 family=2 entries=16 op=nft_register_rule pid=2776 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 00:49:12.211000 audit[2776]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffe43953c0 a2=0 a3=1 items=0 ppid=2587 pid=2776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:12.211000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 00:49:12.220000 audit[2776]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2776 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 00:49:12.220000 audit[2776]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe43953c0 a2=0 a3=1 items=0 ppid=2587 pid=2776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:12.220000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 00:49:12.338747 kubelet[2384]: I0702 00:49:12.338713 2384 topology_manager.go:215] "Topology Admit Handler" podUID="9398f003-52d5-45a6-95ad-5dfddd33e7db" podNamespace="calico-system" podName="calico-typha-57dc64fcc8-mc2w4" Jul 2 00:49:12.379803 kubelet[2384]: I0702 00:49:12.379759 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9398f003-52d5-45a6-95ad-5dfddd33e7db-tigera-ca-bundle\") pod \"calico-typha-57dc64fcc8-mc2w4\" (UID: \"9398f003-52d5-45a6-95ad-5dfddd33e7db\") " pod="calico-system/calico-typha-57dc64fcc8-mc2w4" Jul 2 00:49:12.379803 kubelet[2384]: I0702 00:49:12.379811 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6cxf\" (UniqueName: \"kubernetes.io/projected/9398f003-52d5-45a6-95ad-5dfddd33e7db-kube-api-access-r6cxf\") pod \"calico-typha-57dc64fcc8-mc2w4\" (UID: \"9398f003-52d5-45a6-95ad-5dfddd33e7db\") " pod="calico-system/calico-typha-57dc64fcc8-mc2w4" Jul 2 00:49:12.379982 kubelet[2384]: I0702 00:49:12.379833 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9398f003-52d5-45a6-95ad-5dfddd33e7db-typha-certs\") pod \"calico-typha-57dc64fcc8-mc2w4\" (UID: \"9398f003-52d5-45a6-95ad-5dfddd33e7db\") " pod="calico-system/calico-typha-57dc64fcc8-mc2w4" Jul 2 00:49:12.387670 kubelet[2384]: I0702 00:49:12.387632 2384 topology_manager.go:215] "Topology Admit Handler" podUID="d692c258-71e5-4c19-b990-e0874b6f73b9" podNamespace="calico-system" podName="calico-node-57wdn" Jul 2 00:49:12.480198 kubelet[2384]: I0702 00:49:12.480078 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d692c258-71e5-4c19-b990-e0874b6f73b9-lib-modules\") pod \"calico-node-57wdn\" (UID: \"d692c258-71e5-4c19-b990-e0874b6f73b9\") " pod="calico-system/calico-node-57wdn" Jul 2 00:49:12.480198 kubelet[2384]: I0702 00:49:12.480123 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d692c258-71e5-4c19-b990-e0874b6f73b9-policysync\") pod \"calico-node-57wdn\" (UID: \"d692c258-71e5-4c19-b990-e0874b6f73b9\") " pod="calico-system/calico-node-57wdn" Jul 2 00:49:12.480198 kubelet[2384]: I0702 00:49:12.480144 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d692c258-71e5-4c19-b990-e0874b6f73b9-flexvol-driver-host\") pod \"calico-node-57wdn\" (UID: 
\"d692c258-71e5-4c19-b990-e0874b6f73b9\") " pod="calico-system/calico-node-57wdn" Jul 2 00:49:12.480198 kubelet[2384]: I0702 00:49:12.480164 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn7fp\" (UniqueName: \"kubernetes.io/projected/d692c258-71e5-4c19-b990-e0874b6f73b9-kube-api-access-xn7fp\") pod \"calico-node-57wdn\" (UID: \"d692c258-71e5-4c19-b990-e0874b6f73b9\") " pod="calico-system/calico-node-57wdn" Jul 2 00:49:12.481796 kubelet[2384]: I0702 00:49:12.480183 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d692c258-71e5-4c19-b990-e0874b6f73b9-cni-bin-dir\") pod \"calico-node-57wdn\" (UID: \"d692c258-71e5-4c19-b990-e0874b6f73b9\") " pod="calico-system/calico-node-57wdn" Jul 2 00:49:12.481893 kubelet[2384]: I0702 00:49:12.481820 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d692c258-71e5-4c19-b990-e0874b6f73b9-cni-net-dir\") pod \"calico-node-57wdn\" (UID: \"d692c258-71e5-4c19-b990-e0874b6f73b9\") " pod="calico-system/calico-node-57wdn" Jul 2 00:49:12.481893 kubelet[2384]: I0702 00:49:12.481846 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d692c258-71e5-4c19-b990-e0874b6f73b9-cni-log-dir\") pod \"calico-node-57wdn\" (UID: \"d692c258-71e5-4c19-b990-e0874b6f73b9\") " pod="calico-system/calico-node-57wdn" Jul 2 00:49:12.481970 kubelet[2384]: I0702 00:49:12.481948 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d692c258-71e5-4c19-b990-e0874b6f73b9-tigera-ca-bundle\") pod \"calico-node-57wdn\" (UID: \"d692c258-71e5-4c19-b990-e0874b6f73b9\") " pod="calico-system/calico-node-57wdn" Jul 2 00:49:12.482016 kubelet[2384]: I0702 00:49:12.482001 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d692c258-71e5-4c19-b990-e0874b6f73b9-node-certs\") pod \"calico-node-57wdn\" (UID: \"d692c258-71e5-4c19-b990-e0874b6f73b9\") " pod="calico-system/calico-node-57wdn" Jul 2 00:49:12.482051 kubelet[2384]: I0702 00:49:12.482027 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d692c258-71e5-4c19-b990-e0874b6f73b9-var-run-calico\") pod \"calico-node-57wdn\" (UID: \"d692c258-71e5-4c19-b990-e0874b6f73b9\") " pod="calico-system/calico-node-57wdn" Jul 2 00:49:12.482076 kubelet[2384]: I0702 00:49:12.482061 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d692c258-71e5-4c19-b990-e0874b6f73b9-var-lib-calico\") pod \"calico-node-57wdn\" (UID: \"d692c258-71e5-4c19-b990-e0874b6f73b9\") " pod="calico-system/calico-node-57wdn" Jul 2 00:49:12.482206 kubelet[2384]: I0702 00:49:12.482187 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d692c258-71e5-4c19-b990-e0874b6f73b9-xtables-lock\") pod \"calico-node-57wdn\" (UID: \"d692c258-71e5-4c19-b990-e0874b6f73b9\") " pod="calico-system/calico-node-57wdn" Jul 2 
00:49:12.505685 kubelet[2384]: I0702 00:49:12.505644 2384 topology_manager.go:215] "Topology Admit Handler" podUID="565b958b-5d9c-4fe5-96de-2157ed8f17c7" podNamespace="calico-system" podName="csi-node-driver-dczfn" Jul 2 00:49:12.505921 kubelet[2384]: E0702 00:49:12.505902 2384 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dczfn" podUID="565b958b-5d9c-4fe5-96de-2157ed8f17c7" Jul 2 00:49:12.585888 kubelet[2384]: I0702 00:49:12.585841 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/565b958b-5d9c-4fe5-96de-2157ed8f17c7-socket-dir\") pod \"csi-node-driver-dczfn\" (UID: \"565b958b-5d9c-4fe5-96de-2157ed8f17c7\") " pod="calico-system/csi-node-driver-dczfn" Jul 2 00:49:12.586123 kubelet[2384]: I0702 00:49:12.586108 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/565b958b-5d9c-4fe5-96de-2157ed8f17c7-varrun\") pod \"csi-node-driver-dczfn\" (UID: \"565b958b-5d9c-4fe5-96de-2157ed8f17c7\") " pod="calico-system/csi-node-driver-dczfn" Jul 2 00:49:12.586269 kubelet[2384]: I0702 00:49:12.586231 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snxm8\" (UniqueName: \"kubernetes.io/projected/565b958b-5d9c-4fe5-96de-2157ed8f17c7-kube-api-access-snxm8\") pod \"csi-node-driver-dczfn\" (UID: \"565b958b-5d9c-4fe5-96de-2157ed8f17c7\") " pod="calico-system/csi-node-driver-dczfn" Jul 2 00:49:12.587855 kubelet[2384]: E0702 00:49:12.587813 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.587855 kubelet[2384]: W0702 00:49:12.587838 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.588406 kubelet[2384]: E0702 00:49:12.588388 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.588498 kubelet[2384]: E0702 00:49:12.588476 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.588556 kubelet[2384]: W0702 00:49:12.588485 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.588627 kubelet[2384]: E0702 00:49:12.588616 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:49:12.588945 kubelet[2384]: E0702 00:49:12.588929 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.588945 kubelet[2384]: W0702 00:49:12.588942 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.589039 kubelet[2384]: E0702 00:49:12.588963 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.589140 kubelet[2384]: E0702 00:49:12.589117 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.589140 kubelet[2384]: W0702 00:49:12.589128 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.589209 kubelet[2384]: E0702 00:49:12.589182 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.589287 kubelet[2384]: E0702 00:49:12.589270 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.589287 kubelet[2384]: W0702 00:49:12.589284 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.589382 kubelet[2384]: E0702 00:49:12.589364 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.589460 kubelet[2384]: E0702 00:49:12.589445 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.589460 kubelet[2384]: W0702 00:49:12.589457 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.589524 kubelet[2384]: E0702 00:49:12.589493 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.589628 kubelet[2384]: E0702 00:49:12.589617 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.589628 kubelet[2384]: W0702 00:49:12.589628 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.589738 kubelet[2384]: E0702 00:49:12.589720 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:49:12.589831 kubelet[2384]: E0702 00:49:12.589776 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.589908 kubelet[2384]: W0702 00:49:12.589895 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.590036 kubelet[2384]: E0702 00:49:12.590015 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.590213 kubelet[2384]: E0702 00:49:12.590202 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.590301 kubelet[2384]: W0702 00:49:12.590289 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.590426 kubelet[2384]: E0702 00:49:12.590410 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.590590 kubelet[2384]: E0702 00:49:12.590578 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.590652 kubelet[2384]: W0702 00:49:12.590641 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.590746 kubelet[2384]: E0702 00:49:12.590731 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.590917 kubelet[2384]: E0702 00:49:12.590905 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.590985 kubelet[2384]: W0702 00:49:12.590974 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.591095 kubelet[2384]: E0702 00:49:12.591079 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.591278 kubelet[2384]: E0702 00:49:12.591265 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.591343 kubelet[2384]: W0702 00:49:12.591332 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.591458 kubelet[2384]: E0702 00:49:12.591442 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:49:12.591621 kubelet[2384]: E0702 00:49:12.591610 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.591681 kubelet[2384]: W0702 00:49:12.591670 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.591839 kubelet[2384]: E0702 00:49:12.591822 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.592021 kubelet[2384]: E0702 00:49:12.592009 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.592092 kubelet[2384]: W0702 00:49:12.592080 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.592196 kubelet[2384]: E0702 00:49:12.592179 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.592399 kubelet[2384]: E0702 00:49:12.592386 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.592479 kubelet[2384]: W0702 00:49:12.592466 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.592634 kubelet[2384]: E0702 00:49:12.592610 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.592788 kubelet[2384]: E0702 00:49:12.592775 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.592849 kubelet[2384]: W0702 00:49:12.592838 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.592972 kubelet[2384]: E0702 00:49:12.592956 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.593139 kubelet[2384]: E0702 00:49:12.593127 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.593205 kubelet[2384]: W0702 00:49:12.593194 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.593344 kubelet[2384]: E0702 00:49:12.593324 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:49:12.593528 kubelet[2384]: E0702 00:49:12.593514 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.593601 kubelet[2384]: W0702 00:49:12.593588 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.593705 kubelet[2384]: E0702 00:49:12.593689 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.593750 kubelet[2384]: I0702 00:49:12.593718 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/565b958b-5d9c-4fe5-96de-2157ed8f17c7-kubelet-dir\") pod \"csi-node-driver-dczfn\" (UID: \"565b958b-5d9c-4fe5-96de-2157ed8f17c7\") " pod="calico-system/csi-node-driver-dczfn" Jul 2 00:49:12.593945 kubelet[2384]: E0702 00:49:12.593933 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.594019 kubelet[2384]: W0702 00:49:12.594007 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.594169 kubelet[2384]: E0702 00:49:12.594158 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.594355 kubelet[2384]: E0702 00:49:12.594343 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.594488 kubelet[2384]: W0702 00:49:12.594473 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.594668 kubelet[2384]: E0702 00:49:12.594654 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.594779 kubelet[2384]: E0702 00:49:12.594769 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.594845 kubelet[2384]: W0702 00:49:12.594833 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.595000 kubelet[2384]: E0702 00:49:12.594989 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:49:12.595343 kubelet[2384]: E0702 00:49:12.595326 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.595434 kubelet[2384]: W0702 00:49:12.595419 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.595560 kubelet[2384]: E0702 00:49:12.595535 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.595667 kubelet[2384]: E0702 00:49:12.595657 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.595734 kubelet[2384]: W0702 00:49:12.595724 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.595862 kubelet[2384]: E0702 00:49:12.595841 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.596068 kubelet[2384]: E0702 00:49:12.596056 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.596134 kubelet[2384]: W0702 00:49:12.596122 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.596323 kubelet[2384]: E0702 00:49:12.596308 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.596432 kubelet[2384]: I0702 00:49:12.596421 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/565b958b-5d9c-4fe5-96de-2157ed8f17c7-registration-dir\") pod \"csi-node-driver-dczfn\" (UID: \"565b958b-5d9c-4fe5-96de-2157ed8f17c7\") " pod="calico-system/csi-node-driver-dczfn" Jul 2 00:49:12.596586 kubelet[2384]: E0702 00:49:12.596574 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.596657 kubelet[2384]: W0702 00:49:12.596644 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.596797 kubelet[2384]: E0702 00:49:12.596785 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:49:12.596927 kubelet[2384]: E0702 00:49:12.596915 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.596995 kubelet[2384]: W0702 00:49:12.596983 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.597080 kubelet[2384]: E0702 00:49:12.597069 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.597373 kubelet[2384]: E0702 00:49:12.597360 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.597462 kubelet[2384]: W0702 00:49:12.597449 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.597604 kubelet[2384]: E0702 00:49:12.597593 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.597737 kubelet[2384]: E0702 00:49:12.597725 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.597802 kubelet[2384]: W0702 00:49:12.597791 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.597954 kubelet[2384]: E0702 00:49:12.597944 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.622286 kubelet[2384]: E0702 00:49:12.622259 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.622431 kubelet[2384]: W0702 00:49:12.622414 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.622527 kubelet[2384]: E0702 00:49:12.622517 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.636402 kubelet[2384]: E0702 00:49:12.636362 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.636595 kubelet[2384]: W0702 00:49:12.636577 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.636671 kubelet[2384]: E0702 00:49:12.636661 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:49:12.637088 kubelet[2384]: E0702 00:49:12.637074 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.637169 kubelet[2384]: W0702 00:49:12.637157 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.637846 kubelet[2384]: E0702 00:49:12.637816 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.637992 kubelet[2384]: E0702 00:49:12.637979 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.638051 kubelet[2384]: W0702 00:49:12.638040 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.638132 kubelet[2384]: E0702 00:49:12.638118 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.638395 kubelet[2384]: E0702 00:49:12.638383 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.638493 kubelet[2384]: W0702 00:49:12.638474 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.638595 kubelet[2384]: E0702 00:49:12.638572 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.639117 kubelet[2384]: E0702 00:49:12.639103 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.639215 kubelet[2384]: W0702 00:49:12.639202 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.639313 kubelet[2384]: E0702 00:49:12.639297 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.639523 kubelet[2384]: E0702 00:49:12.639511 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.639592 kubelet[2384]: W0702 00:49:12.639581 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.639658 kubelet[2384]: E0702 00:49:12.639649 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:49:12.643388 kubelet[2384]: E0702 00:49:12.643361 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:49:12.650757 containerd[1351]: time="2024-07-02T00:49:12.650709954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57dc64fcc8-mc2w4,Uid:9398f003-52d5-45a6-95ad-5dfddd33e7db,Namespace:calico-system,Attempt:0,}" Jul 2 00:49:12.697338 kubelet[2384]: E0702 00:49:12.697311 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.697338 kubelet[2384]: W0702 00:49:12.697333 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.697523 kubelet[2384]: E0702 00:49:12.697354 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.697565 kubelet[2384]: E0702 00:49:12.697548 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.697565 kubelet[2384]: W0702 00:49:12.697562 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.697624 kubelet[2384]: E0702 00:49:12.697582 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.697830 kubelet[2384]: E0702 00:49:12.697802 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.697830 kubelet[2384]: W0702 00:49:12.697815 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.697920 kubelet[2384]: E0702 00:49:12.697834 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.698087 kubelet[2384]: E0702 00:49:12.698075 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.698121 kubelet[2384]: W0702 00:49:12.698088 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.698121 kubelet[2384]: E0702 00:49:12.698106 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:49:12.698335 kubelet[2384]: E0702 00:49:12.698320 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.698335 kubelet[2384]: W0702 00:49:12.698335 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.698418 kubelet[2384]: E0702 00:49:12.698354 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.698583 kubelet[2384]: E0702 00:49:12.698567 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.698583 kubelet[2384]: W0702 00:49:12.698582 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.698656 kubelet[2384]: E0702 00:49:12.698599 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.699358 kubelet[2384]: E0702 00:49:12.699342 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.699440 kubelet[2384]: W0702 00:49:12.699427 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.699574 kubelet[2384]: E0702 00:49:12.699551 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.699698 kubelet[2384]: E0702 00:49:12.699685 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.699766 kubelet[2384]: W0702 00:49:12.699755 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.699896 kubelet[2384]: E0702 00:49:12.699869 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.700126 kubelet[2384]: E0702 00:49:12.700114 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.700208 kubelet[2384]: W0702 00:49:12.700196 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.700408 kubelet[2384]: E0702 00:49:12.700389 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:49:12.700602 kubelet[2384]: E0702 00:49:12.700588 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:49:12.701204 kubelet[2384]: E0702 00:49:12.701189 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.701314 kubelet[2384]: W0702 00:49:12.701300 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.701412 containerd[1351]: time="2024-07-02T00:49:12.701366561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-57wdn,Uid:d692c258-71e5-4c19-b990-e0874b6f73b9,Namespace:calico-system,Attempt:0,}" Jul 2 00:49:12.701486 kubelet[2384]: E0702 00:49:12.701475 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.701806 kubelet[2384]: E0702 00:49:12.701791 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.701910 kubelet[2384]: W0702 00:49:12.701894 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.702068 kubelet[2384]: E0702 00:49:12.702047 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.702201 kubelet[2384]: E0702 00:49:12.702189 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.702307 kubelet[2384]: W0702 00:49:12.702294 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.702455 kubelet[2384]: E0702 00:49:12.702429 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.702579 kubelet[2384]: E0702 00:49:12.702567 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.702648 kubelet[2384]: W0702 00:49:12.702636 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.702771 kubelet[2384]: E0702 00:49:12.702756 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:49:12.703088 kubelet[2384]: E0702 00:49:12.703071 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.703213 kubelet[2384]: W0702 00:49:12.703199 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.703440 kubelet[2384]: E0702 00:49:12.703414 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.703562 kubelet[2384]: E0702 00:49:12.703550 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.703622 kubelet[2384]: W0702 00:49:12.703610 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.703879 kubelet[2384]: E0702 00:49:12.703860 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.704188 kubelet[2384]: E0702 00:49:12.704173 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.704317 kubelet[2384]: W0702 00:49:12.704294 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.704477 kubelet[2384]: E0702 00:49:12.704467 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.704774 kubelet[2384]: E0702 00:49:12.704760 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.704856 kubelet[2384]: W0702 00:49:12.704843 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.704960 kubelet[2384]: E0702 00:49:12.704941 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.705183 kubelet[2384]: E0702 00:49:12.705171 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.705312 kubelet[2384]: W0702 00:49:12.705297 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.705411 kubelet[2384]: E0702 00:49:12.705397 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:49:12.705643 kubelet[2384]: E0702 00:49:12.705631 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.705719 kubelet[2384]: W0702 00:49:12.705707 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.705802 kubelet[2384]: E0702 00:49:12.705790 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.706054 kubelet[2384]: E0702 00:49:12.706041 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.706137 kubelet[2384]: W0702 00:49:12.706122 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.706292 kubelet[2384]: E0702 00:49:12.706280 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.706546 kubelet[2384]: E0702 00:49:12.706534 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.706618 kubelet[2384]: W0702 00:49:12.706606 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.706759 kubelet[2384]: E0702 00:49:12.706746 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.707045 kubelet[2384]: E0702 00:49:12.707033 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.707131 kubelet[2384]: W0702 00:49:12.707118 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.707297 kubelet[2384]: E0702 00:49:12.707285 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.707610 kubelet[2384]: E0702 00:49:12.707596 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.707700 kubelet[2384]: W0702 00:49:12.707687 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.707768 kubelet[2384]: E0702 00:49:12.707759 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:49:12.708050 kubelet[2384]: E0702 00:49:12.708037 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.708190 kubelet[2384]: W0702 00:49:12.708174 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.708289 kubelet[2384]: E0702 00:49:12.708277 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.722061 kubelet[2384]: E0702 00:49:12.722037 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.722217 kubelet[2384]: W0702 00:49:12.722202 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.722316 kubelet[2384]: E0702 00:49:12.722304 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.741494 kubelet[2384]: E0702 00:49:12.741392 2384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:49:12.741494 kubelet[2384]: W0702 00:49:12.741431 2384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:49:12.741494 kubelet[2384]: E0702 00:49:12.741454 2384 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:49:12.779420 containerd[1351]: time="2024-07-02T00:49:12.779312880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:49:12.779608 containerd[1351]: time="2024-07-02T00:49:12.779372207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:49:12.779608 containerd[1351]: time="2024-07-02T00:49:12.779484381Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:49:12.779608 containerd[1351]: time="2024-07-02T00:49:12.779500423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:49:12.783775 containerd[1351]: time="2024-07-02T00:49:12.783677675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:49:12.783775 containerd[1351]: time="2024-07-02T00:49:12.783726681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:49:12.783775 containerd[1351]: time="2024-07-02T00:49:12.783752564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:49:12.785086 containerd[1351]: time="2024-07-02T00:49:12.784423290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:49:12.839390 containerd[1351]: time="2024-07-02T00:49:12.839327477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57dc64fcc8-mc2w4,Uid:9398f003-52d5-45a6-95ad-5dfddd33e7db,Namespace:calico-system,Attempt:0,} returns sandbox id \"a598838ce936e7240fa226d6d1954a1531868edf45052e3e7dd208ddfeaea902\"" Jul 2 00:49:12.839997 kubelet[2384]: E0702 00:49:12.839972 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:49:12.843349 containerd[1351]: time="2024-07-02T00:49:12.842575490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jul 2 00:49:12.848463 containerd[1351]: time="2024-07-02T00:49:12.848426394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-57wdn,Uid:d692c258-71e5-4c19-b990-e0874b6f73b9,Namespace:calico-system,Attempt:0,} returns sandbox id \"8ea60b804ea7cec16a41b7d40960214f566cc779889d1a7d050aeb8912c6d91b\"" Jul 2 00:49:12.849055 kubelet[2384]: E0702 00:49:12.849018 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:49:13.235000 audit[2929]: NETFILTER_CFG table=filter:95 family=2 entries=16 op=nft_register_rule pid=2929 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 00:49:13.238473 kernel: kauditd_printk_skb: 161 callbacks suppressed Jul 2 00:49:13.238537 kernel: audit: type=1325 audit(1719881353.235:281): table=filter:95 family=2 entries=16 op=nft_register_rule pid=2929 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 00:49:13.238568 kernel: audit: type=1300 audit(1719881353.235:281): arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=fffff9b6bc00 a2=0 a3=1 items=0 ppid=2587 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:13.235000 audit[2929]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=fffff9b6bc00 a2=0 a3=1 items=0 ppid=2587 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:13.235000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 00:49:13.242671 kernel: audit: type=1327 audit(1719881353.235:281): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 00:49:13.236000 audit[2929]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2929 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 00:49:13.236000 audit[2929]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff9b6bc00 a2=0 a3=1 items=0 ppid=2587 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:13.252029 kernel: audit: type=1325 audit(1719881353.236:282): table=nat:96 family=2 entries=12 op=nft_register_rule pid=2929 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 00:49:13.252078 kernel: audit: type=1300 audit(1719881353.236:282): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff9b6bc00 a2=0 a3=1 items=0 ppid=2587 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:13.252102 kernel: audit: type=1327 audit(1719881353.236:282): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 00:49:13.236000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 00:49:13.977160 kubelet[2384]: E0702 00:49:13.977127 2384 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dczfn" podUID="565b958b-5d9c-4fe5-96de-2157ed8f17c7" Jul 2 00:49:14.038264 containerd[1351]: time="2024-07-02T00:49:14.038203810Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:49:14.038698 containerd[1351]: time="2024-07-02T00:49:14.038663544Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513" Jul 2 00:49:14.039418 containerd[1351]: time="2024-07-02T00:49:14.039391029Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:49:14.040913 containerd[1351]: time="2024-07-02T00:49:14.040880723Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:49:14.042547 containerd[1351]: time="2024-07-02T00:49:14.042510554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:49:14.043205 containerd[1351]: time="2024-07-02T00:49:14.043176672Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 1.200418079s" Jul 2 00:49:14.043286 containerd[1351]: time="2024-07-02T00:49:14.043210556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Jul 2 00:49:14.044529 containerd[1351]: time="2024-07-02T00:49:14.044468663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jul 2 00:49:14.054805 containerd[1351]: time="2024-07-02T00:49:14.054764029Z" level=info msg="CreateContainer within sandbox \"a598838ce936e7240fa226d6d1954a1531868edf45052e3e7dd208ddfeaea902\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 00:49:14.064414 containerd[1351]: time="2024-07-02T00:49:14.064372394Z" level=info msg="CreateContainer within sandbox \"a598838ce936e7240fa226d6d1954a1531868edf45052e3e7dd208ddfeaea902\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c5d2068fe34abcbee6863e507541182c84ffd418070f3a80fafe1868cc1b1ac2\"" Jul 2 00:49:14.068987 containerd[1351]: time="2024-07-02T00:49:14.068946130Z" level=info msg="StartContainer for \"c5d2068fe34abcbee6863e507541182c84ffd418070f3a80fafe1868cc1b1ac2\"" Jul 2 00:49:14.132150 containerd[1351]: time="2024-07-02T00:49:14.132101685Z" level=info msg="StartContainer for \"c5d2068fe34abcbee6863e507541182c84ffd418070f3a80fafe1868cc1b1ac2\" returns successfully" Jul 2 00:49:14.976354 containerd[1351]: time="2024-07-02T00:49:14.972995031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:49:14.979899 containerd[1351]: time="2024-07-02T00:49:14.978133952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Jul 2 00:49:14.979899 containerd[1351]: time="2024-07-02T00:49:14.979152272Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:49:14.980946 containerd[1351]: time="2024-07-02T00:49:14.980914558Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:49:14.983269 containerd[1351]: time="2024-07-02T00:49:14.983215187Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 938.353198ms" Jul 2 00:49:14.983337 containerd[1351]: time="2024-07-02T00:49:14.983271074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Jul 2 00:49:14.984720 containerd[1351]: time="2024-07-02T00:49:14.984690280Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:49:14.986633 containerd[1351]: time="2024-07-02T00:49:14.986131489Z" level=info msg="CreateContainer within sandbox \"8ea60b804ea7cec16a41b7d40960214f566cc779889d1a7d050aeb8912c6d91b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 00:49:15.010572 containerd[1351]: time="2024-07-02T00:49:15.010509021Z" level=info msg="CreateContainer within sandbox \"8ea60b804ea7cec16a41b7d40960214f566cc779889d1a7d050aeb8912c6d91b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4f3403a16c3b207d3863ec8ed208795ea21bf47c8abfe186c381cb2f6e6dfa13\"" Jul 2 00:49:15.011352 containerd[1351]: time="2024-07-02T00:49:15.011318752Z" level=info msg="StartContainer for \"4f3403a16c3b207d3863ec8ed208795ea21bf47c8abfe186c381cb2f6e6dfa13\"" Jul 2 00:49:15.049217 
kubelet[2384]: E0702 00:49:15.048762 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:49:15.060092 kubelet[2384]: I0702 00:49:15.060062 2384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-57dc64fcc8-mc2w4" podStartSLOduration=1.858237102 podCreationTimestamp="2024-07-02 00:49:12 +0000 UTC" firstStartedPulling="2024-07-02 00:49:12.841811753 +0000 UTC m=+20.986811923" lastFinishedPulling="2024-07-02 00:49:14.043595601 +0000 UTC m=+22.188595771" observedRunningTime="2024-07-02 00:49:15.059461087 +0000 UTC m=+23.204461337" watchObservedRunningTime="2024-07-02 00:49:15.06002095 +0000 UTC m=+23.205021120" Jul 2 00:49:15.078393 containerd[1351]: time="2024-07-02T00:49:15.078333730Z" level=info msg="StartContainer for \"4f3403a16c3b207d3863ec8ed208795ea21bf47c8abfe186c381cb2f6e6dfa13\" returns successfully" Jul 2 00:49:15.224452 containerd[1351]: time="2024-07-02T00:49:15.224335714Z" level=info msg="shim disconnected" id=4f3403a16c3b207d3863ec8ed208795ea21bf47c8abfe186c381cb2f6e6dfa13 namespace=k8s.io Jul 2 00:49:15.224452 containerd[1351]: time="2024-07-02T00:49:15.224389520Z" level=warning msg="cleaning up after shim disconnected" id=4f3403a16c3b207d3863ec8ed208795ea21bf47c8abfe186c381cb2f6e6dfa13 namespace=k8s.io Jul 2 00:49:15.224452 containerd[1351]: time="2024-07-02T00:49:15.224398321Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:49:15.499092 systemd[1]: run-containerd-runc-k8s.io-4f3403a16c3b207d3863ec8ed208795ea21bf47c8abfe186c381cb2f6e6dfa13-runc.LJUMjV.mount: Deactivated successfully. Jul 2 00:49:15.499254 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f3403a16c3b207d3863ec8ed208795ea21bf47c8abfe186c381cb2f6e6dfa13-rootfs.mount: Deactivated successfully. 
Jul 2 00:49:15.977289 kubelet[2384]: E0702 00:49:15.977107 2384 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dczfn" podUID="565b958b-5d9c-4fe5-96de-2157ed8f17c7" Jul 2 00:49:16.053954 kubelet[2384]: I0702 00:49:16.053922 2384 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:49:16.054664 kubelet[2384]: E0702 00:49:16.054636 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:49:16.055127 kubelet[2384]: E0702 00:49:16.055107 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:49:16.055992 containerd[1351]: time="2024-07-02T00:49:16.055953506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jul 2 00:49:17.979597 kubelet[2384]: E0702 00:49:17.976713 2384 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dczfn" podUID="565b958b-5d9c-4fe5-96de-2157ed8f17c7" Jul 2 00:49:18.359836 containerd[1351]: time="2024-07-02T00:49:18.359554426Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:49:18.361160 containerd[1351]: time="2024-07-02T00:49:18.361123944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715" Jul 2 00:49:18.362254 containerd[1351]: time="2024-07-02T00:49:18.362200852Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:49:18.364531 containerd[1351]: time="2024-07-02T00:49:18.364478600Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:49:18.367584 containerd[1351]: time="2024-07-02T00:49:18.367504144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:49:18.368267 containerd[1351]: time="2024-07-02T00:49:18.368084082Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 2.312088971s" Jul 2 00:49:18.368267 containerd[1351]: time="2024-07-02T00:49:18.368118805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Jul 2 00:49:18.371321 containerd[1351]: time="2024-07-02T00:49:18.371194394Z" level=info msg="CreateContainer within sandbox \"8ea60b804ea7cec16a41b7d40960214f566cc779889d1a7d050aeb8912c6d91b\" for 
container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 00:49:18.388521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount19929345.mount: Deactivated successfully. Jul 2 00:49:18.397941 containerd[1351]: time="2024-07-02T00:49:18.397786262Z" level=info msg="CreateContainer within sandbox \"8ea60b804ea7cec16a41b7d40960214f566cc779889d1a7d050aeb8912c6d91b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e54ee0e13e1826fcc96fb949305b7709ddb328255147b9aa4cca145e33d526c3\"" Jul 2 00:49:18.400185 containerd[1351]: time="2024-07-02T00:49:18.398457729Z" level=info msg="StartContainer for \"e54ee0e13e1826fcc96fb949305b7709ddb328255147b9aa4cca145e33d526c3\"" Jul 2 00:49:18.456959 containerd[1351]: time="2024-07-02T00:49:18.456889151Z" level=info msg="StartContainer for \"e54ee0e13e1826fcc96fb949305b7709ddb328255147b9aa4cca145e33d526c3\" returns successfully" Jul 2 00:49:19.028206 kubelet[2384]: I0702 00:49:19.027963 2384 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jul 2 00:49:19.039671 containerd[1351]: time="2024-07-02T00:49:19.039617996Z" level=info msg="shim disconnected" id=e54ee0e13e1826fcc96fb949305b7709ddb328255147b9aa4cca145e33d526c3 namespace=k8s.io Jul 2 00:49:19.039671 containerd[1351]: time="2024-07-02T00:49:19.039670201Z" level=warning msg="cleaning up after shim disconnected" id=e54ee0e13e1826fcc96fb949305b7709ddb328255147b9aa4cca145e33d526c3 namespace=k8s.io Jul 2 00:49:19.039671 containerd[1351]: time="2024-07-02T00:49:19.039678802Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:49:19.052978 kubelet[2384]: I0702 00:49:19.051104 2384 topology_manager.go:215] "Topology Admit Handler" podUID="72fe4de1-b2e6-441a-b885-9fce4e3cebac" podNamespace="calico-system" podName="calico-kube-controllers-f49f44d4b-xn8pc" Jul 2 00:49:19.053205 kubelet[2384]: I0702 00:49:19.053182 2384 topology_manager.go:215] "Topology Admit Handler" podUID="4e4dbead-dbec-4913-9f82-ae7fdc8be31c" podNamespace="kube-system" podName="coredns-5dd5756b68-b89tp" Jul 2 00:49:19.055993 kubelet[2384]: I0702 00:49:19.055972 2384 topology_manager.go:215] "Topology Admit Handler" podUID="c5486d5f-6264-40ba-9f70-21d17d9388ed" podNamespace="kube-system" podName="coredns-5dd5756b68-qwwn6" Jul 2 00:49:19.066140 kubelet[2384]: I0702 00:49:19.066114 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5486d5f-6264-40ba-9f70-21d17d9388ed-config-volume\") pod \"coredns-5dd5756b68-qwwn6\" (UID: \"c5486d5f-6264-40ba-9f70-21d17d9388ed\") " pod="kube-system/coredns-5dd5756b68-qwwn6" Jul 2 00:49:19.066402 kubelet[2384]: I0702 00:49:19.066159 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72fe4de1-b2e6-441a-b885-9fce4e3cebac-tigera-ca-bundle\") pod \"calico-kube-controllers-f49f44d4b-xn8pc\" (UID: \"72fe4de1-b2e6-441a-b885-9fce4e3cebac\") " pod="calico-system/calico-kube-controllers-f49f44d4b-xn8pc" Jul 2 00:49:19.066402 kubelet[2384]: I0702 00:49:19.066184 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78chb\" (UniqueName: \"kubernetes.io/projected/72fe4de1-b2e6-441a-b885-9fce4e3cebac-kube-api-access-78chb\") pod \"calico-kube-controllers-f49f44d4b-xn8pc\" (UID: \"72fe4de1-b2e6-441a-b885-9fce4e3cebac\") " 
pod="calico-system/calico-kube-controllers-f49f44d4b-xn8pc" Jul 2 00:49:19.066402 kubelet[2384]: I0702 00:49:19.066207 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwjpw\" (UniqueName: \"kubernetes.io/projected/c5486d5f-6264-40ba-9f70-21d17d9388ed-kube-api-access-zwjpw\") pod \"coredns-5dd5756b68-qwwn6\" (UID: \"c5486d5f-6264-40ba-9f70-21d17d9388ed\") " pod="kube-system/coredns-5dd5756b68-qwwn6" Jul 2 00:49:19.066402 kubelet[2384]: I0702 00:49:19.066267 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrdzn\" (UniqueName: \"kubernetes.io/projected/4e4dbead-dbec-4913-9f82-ae7fdc8be31c-kube-api-access-jrdzn\") pod \"coredns-5dd5756b68-b89tp\" (UID: \"4e4dbead-dbec-4913-9f82-ae7fdc8be31c\") " pod="kube-system/coredns-5dd5756b68-b89tp" Jul 2 00:49:19.066402 kubelet[2384]: I0702 00:49:19.066291 2384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e4dbead-dbec-4913-9f82-ae7fdc8be31c-config-volume\") pod \"coredns-5dd5756b68-b89tp\" (UID: \"4e4dbead-dbec-4913-9f82-ae7fdc8be31c\") " pod="kube-system/coredns-5dd5756b68-b89tp" Jul 2 00:49:19.080387 kubelet[2384]: E0702 00:49:19.080367 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:49:19.082030 containerd[1351]: time="2024-07-02T00:49:19.081997337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jul 2 00:49:19.359937 kubelet[2384]: E0702 00:49:19.358843 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:49:19.359937 kubelet[2384]: E0702 00:49:19.359874 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:49:19.360334 containerd[1351]: time="2024-07-02T00:49:19.359218803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f49f44d4b-xn8pc,Uid:72fe4de1-b2e6-441a-b885-9fce4e3cebac,Namespace:calico-system,Attempt:0,}" Jul 2 00:49:19.360927 containerd[1351]: time="2024-07-02T00:49:19.360781074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-qwwn6,Uid:c5486d5f-6264-40ba-9f70-21d17d9388ed,Namespace:kube-system,Attempt:0,}" Jul 2 00:49:19.360927 containerd[1351]: time="2024-07-02T00:49:19.360859761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-b89tp,Uid:4e4dbead-dbec-4913-9f82-ae7fdc8be31c,Namespace:kube-system,Attempt:0,}" Jul 2 00:49:19.385282 systemd[1]: run-containerd-runc-k8s.io-e54ee0e13e1826fcc96fb949305b7709ddb328255147b9aa4cca145e33d526c3-runc.9KNHv4.mount: Deactivated successfully. Jul 2 00:49:19.385419 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e54ee0e13e1826fcc96fb949305b7709ddb328255147b9aa4cca145e33d526c3-rootfs.mount: Deactivated successfully. 
Jul 2 00:49:19.603394 containerd[1351]: time="2024-07-02T00:49:19.603318223Z" level=error msg="Failed to destroy network for sandbox \"1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:49:19.603579 containerd[1351]: time="2024-07-02T00:49:19.603551886Z" level=error msg="Failed to destroy network for sandbox \"ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:49:19.603890 containerd[1351]: time="2024-07-02T00:49:19.603855195Z" level=error msg="encountered an error cleaning up failed sandbox \"1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:49:19.603938 containerd[1351]: time="2024-07-02T00:49:19.603914201Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f49f44d4b-xn8pc,Uid:72fe4de1-b2e6-441a-b885-9fce4e3cebac,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:49:19.604177 containerd[1351]: time="2024-07-02T00:49:19.604146743Z" level=error msg="encountered an error cleaning up failed sandbox \"ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:49:19.604216 containerd[1351]: time="2024-07-02T00:49:19.604197628Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-b89tp,Uid:4e4dbead-dbec-4913-9f82-ae7fdc8be31c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:49:19.604490 kubelet[2384]: E0702 00:49:19.604465 2384 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:49:19.604543 kubelet[2384]: E0702 00:49:19.604525 2384 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:49:19.604587 kubelet[2384]: E0702 00:49:19.604578 2384 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-b89tp" Jul 2 00:49:19.604615 kubelet[2384]: E0702 00:49:19.604601 2384 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-b89tp" Jul 2 00:49:19.604649 kubelet[2384]: E0702 00:49:19.604528 2384 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f49f44d4b-xn8pc" Jul 2 00:49:19.604682 kubelet[2384]: E0702 00:49:19.604655 2384 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f49f44d4b-xn8pc" Jul 2 00:49:19.604682 kubelet[2384]: E0702 00:49:19.604664 2384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-b89tp_kube-system(4e4dbead-dbec-4913-9f82-ae7fdc8be31c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-b89tp_kube-system(4e4dbead-dbec-4913-9f82-ae7fdc8be31c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-b89tp" podUID="4e4dbead-dbec-4913-9f82-ae7fdc8be31c" Jul 2 00:49:19.604749 kubelet[2384]: E0702 00:49:19.604727 2384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f49f44d4b-xn8pc_calico-system(72fe4de1-b2e6-441a-b885-9fce4e3cebac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f49f44d4b-xn8pc_calico-system(72fe4de1-b2e6-441a-b885-9fce4e3cebac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f49f44d4b-xn8pc" podUID="72fe4de1-b2e6-441a-b885-9fce4e3cebac" Jul 2 00:49:19.606436 containerd[1351]: time="2024-07-02T00:49:19.606390920Z" level=error msg="Failed to destroy network for sandbox \"1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:49:19.606711 containerd[1351]: time="2024-07-02T00:49:19.606673828Z" level=error msg="encountered an error cleaning up failed sandbox \"1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:49:19.606750 containerd[1351]: time="2024-07-02T00:49:19.606716392Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-qwwn6,Uid:c5486d5f-6264-40ba-9f70-21d17d9388ed,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:49:19.606914 kubelet[2384]: E0702 00:49:19.606888 2384 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:49:19.607027 kubelet[2384]: E0702 00:49:19.606934 2384 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-qwwn6" Jul 2 00:49:19.607027 kubelet[2384]: E0702 00:49:19.606953 2384 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-qwwn6" Jul 2 00:49:19.607027 kubelet[2384]: E0702 00:49:19.607001 2384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-qwwn6_kube-system(c5486d5f-6264-40ba-9f70-21d17d9388ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-qwwn6_kube-system(c5486d5f-6264-40ba-9f70-21d17d9388ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-qwwn6" podUID="c5486d5f-6264-40ba-9f70-21d17d9388ed" Jul 2 00:49:19.985887 containerd[1351]: time="2024-07-02T00:49:19.985795113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dczfn,Uid:565b958b-5d9c-4fe5-96de-2157ed8f17c7,Namespace:calico-system,Attempt:0,}" Jul 2 00:49:20.065101 containerd[1351]: time="2024-07-02T00:49:20.065040609Z" level=error msg="Failed to destroy network for sandbox \"c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:49:20.065419 containerd[1351]: time="2024-07-02T00:49:20.065380841Z" level=error msg="encountered an error cleaning up failed sandbox \"c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:49:20.065467 containerd[1351]: time="2024-07-02T00:49:20.065436806Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dczfn,Uid:565b958b-5d9c-4fe5-96de-2157ed8f17c7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:49:20.065707 kubelet[2384]: E0702 00:49:20.065664 2384 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:49:20.067456 kubelet[2384]: E0702 00:49:20.065726 2384 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dczfn" Jul 2 00:49:20.067456 kubelet[2384]: E0702 00:49:20.065747 2384 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dczfn" Jul 2 00:49:20.067456 kubelet[2384]: E0702 00:49:20.065796 2384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-dczfn_calico-system(565b958b-5d9c-4fe5-96de-2157ed8f17c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dczfn_calico-system(565b958b-5d9c-4fe5-96de-2157ed8f17c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dczfn" podUID="565b958b-5d9c-4fe5-96de-2157ed8f17c7" Jul 2 00:49:20.082626 kubelet[2384]: I0702 00:49:20.082396 2384 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" Jul 2 00:49:20.083978 containerd[1351]: time="2024-07-02T00:49:20.083257511Z" level=info msg="StopPodSandbox for \"c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05\"" Jul 2 00:49:20.084070 kubelet[2384]: I0702 00:49:20.083801 2384 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" Jul 2 00:49:20.084848 containerd[1351]: time="2024-07-02T00:49:20.084223801Z" level=info msg="StopPodSandbox for \"1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090\"" Jul 2 00:49:20.085076 kubelet[2384]: I0702 00:49:20.085057 2384 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" Jul 2 00:49:20.085495 containerd[1351]: time="2024-07-02T00:49:20.085467237Z" level=info msg="StopPodSandbox for \"1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a\"" Jul 2 00:49:20.086713 containerd[1351]: time="2024-07-02T00:49:20.086127859Z" level=info msg="Ensure that sandbox 1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090 in task-service has been cleanup successfully" Jul 2 00:49:20.086713 containerd[1351]: time="2024-07-02T00:49:20.086426847Z" level=info msg="Ensure that sandbox 1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a in task-service has been cleanup successfully" Jul 2 00:49:20.101349 containerd[1351]: time="2024-07-02T00:49:20.101297756Z" level=info msg="Ensure that sandbox c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05 in task-service has been cleanup successfully" Jul 2 00:49:20.104127 kubelet[2384]: I0702 00:49:20.102836 2384 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" Jul 2 00:49:20.104216 containerd[1351]: time="2024-07-02T00:49:20.103589490Z" level=info msg="StopPodSandbox for \"ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c\"" Jul 2 00:49:20.104216 containerd[1351]: time="2024-07-02T00:49:20.103773948Z" level=info msg="Ensure that sandbox ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c in task-service has been cleanup successfully" Jul 2 00:49:20.143965 containerd[1351]: time="2024-07-02T00:49:20.143908577Z" level=error msg="StopPodSandbox for \"ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c\" failed" error="failed to destroy network for sandbox \"ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:49:20.147599 kubelet[2384]: E0702 00:49:20.147561 2384 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" Jul 2 00:49:20.147740 kubelet[2384]: E0702 00:49:20.147657 2384 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c"} Jul 2 00:49:20.147740 kubelet[2384]: E0702 00:49:20.147699 2384 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4e4dbead-dbec-4913-9f82-ae7fdc8be31c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:49:20.147740 kubelet[2384]: E0702 00:49:20.147731 2384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4e4dbead-dbec-4913-9f82-ae7fdc8be31c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-b89tp" podUID="4e4dbead-dbec-4913-9f82-ae7fdc8be31c" Jul 2 00:49:20.154409 containerd[1351]: time="2024-07-02T00:49:20.154341512Z" level=error msg="StopPodSandbox for \"c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05\" failed" error="failed to destroy network for sandbox \"c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:49:20.154725 kubelet[2384]: E0702 00:49:20.154662 2384 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" Jul 2 00:49:20.154725 kubelet[2384]: E0702 00:49:20.154719 2384 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05"} Jul 2 00:49:20.154835 kubelet[2384]: E0702 00:49:20.154758 2384 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"565b958b-5d9c-4fe5-96de-2157ed8f17c7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed 
to destroy network for sandbox \\\"c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:49:20.154835 kubelet[2384]: E0702 00:49:20.154788 2384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"565b958b-5d9c-4fe5-96de-2157ed8f17c7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dczfn" podUID="565b958b-5d9c-4fe5-96de-2157ed8f17c7" Jul 2 00:49:20.161180 containerd[1351]: time="2024-07-02T00:49:20.161120505Z" level=error msg="StopPodSandbox for \"1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090\" failed" error="failed to destroy network for sandbox \"1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:49:20.161449 kubelet[2384]: E0702 00:49:20.161417 2384 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" Jul 2 00:49:20.161515 kubelet[2384]: E0702 00:49:20.161462 2384 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090"} Jul 2 00:49:20.161515 kubelet[2384]: E0702 00:49:20.161500 2384 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"72fe4de1-b2e6-441a-b885-9fce4e3cebac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:49:20.161598 kubelet[2384]: E0702 00:49:20.161527 2384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"72fe4de1-b2e6-441a-b885-9fce4e3cebac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f49f44d4b-xn8pc" podUID="72fe4de1-b2e6-441a-b885-9fce4e3cebac" Jul 2 00:49:20.171954 containerd[1351]: time="2024-07-02T00:49:20.171871430Z" level=error msg="StopPodSandbox for 
\"1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a\" failed" error="failed to destroy network for sandbox \"1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:49:20.172363 kubelet[2384]: E0702 00:49:20.172328 2384 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" Jul 2 00:49:20.172427 kubelet[2384]: E0702 00:49:20.172373 2384 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a"} Jul 2 00:49:20.172427 kubelet[2384]: E0702 00:49:20.172407 2384 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c5486d5f-6264-40ba-9f70-21d17d9388ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:49:20.172519 kubelet[2384]: E0702 00:49:20.172446 2384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c5486d5f-6264-40ba-9f70-21d17d9388ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-qwwn6" podUID="c5486d5f-6264-40ba-9f70-21d17d9388ed" Jul 2 00:49:20.384282 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a-shm.mount: Deactivated successfully. Jul 2 00:49:20.384419 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090-shm.mount: Deactivated successfully. Jul 2 00:49:20.384510 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c-shm.mount: Deactivated successfully. Jul 2 00:49:21.924812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2426238026.mount: Deactivated successfully. 
Jul 2 00:49:22.113447 containerd[1351]: time="2024-07-02T00:49:22.113396574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:49:22.116373 containerd[1351]: time="2024-07-02T00:49:22.116339512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Jul 2 00:49:22.117933 containerd[1351]: time="2024-07-02T00:49:22.117897648Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:49:22.132153 containerd[1351]: time="2024-07-02T00:49:22.132117570Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:49:22.133655 containerd[1351]: time="2024-07-02T00:49:22.133606980Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:49:22.134439 containerd[1351]: time="2024-07-02T00:49:22.134411771Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 3.052189492s" Jul 2 00:49:22.134496 containerd[1351]: time="2024-07-02T00:49:22.134445053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Jul 2 00:49:22.146153 containerd[1351]: time="2024-07-02T00:49:22.146120513Z" level=info msg="CreateContainer within sandbox \"8ea60b804ea7cec16a41b7d40960214f566cc779889d1a7d050aeb8912c6d91b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 2 00:49:22.159443 containerd[1351]: time="2024-07-02T00:49:22.159395873Z" level=info msg="CreateContainer within sandbox \"8ea60b804ea7cec16a41b7d40960214f566cc779889d1a7d050aeb8912c6d91b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"352b5c5995b2904e5707fd21fbdb0499d65c503bd99a4e485687a07697e7fc87\"" Jul 2 00:49:22.165676 containerd[1351]: time="2024-07-02T00:49:22.165629778Z" level=info msg="StartContainer for \"352b5c5995b2904e5707fd21fbdb0499d65c503bd99a4e485687a07697e7fc87\"" Jul 2 00:49:22.241327 containerd[1351]: time="2024-07-02T00:49:22.237900532Z" level=info msg="StartContainer for \"352b5c5995b2904e5707fd21fbdb0499d65c503bd99a4e485687a07697e7fc87\" returns successfully" Jul 2 00:49:22.412782 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 2 00:49:22.412920 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 2 00:49:23.111488 kubelet[2384]: E0702 00:49:23.111442 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:49:23.127415 kubelet[2384]: I0702 00:49:23.127385 2384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-57wdn" podStartSLOduration=1.8427917470000001 podCreationTimestamp="2024-07-02 00:49:12 +0000 UTC" firstStartedPulling="2024-07-02 00:49:12.850097967 +0000 UTC m=+20.995098097" lastFinishedPulling="2024-07-02 00:49:22.134646631 +0000 UTC m=+30.279646801" observedRunningTime="2024-07-02 00:49:23.126022059 +0000 UTC m=+31.271022229" watchObservedRunningTime="2024-07-02 00:49:23.127340451 +0000 UTC m=+31.272340621" Jul 2 00:49:23.745000 audit[3490]: AVC avc: denied { write } for pid=3490 comm="tee" name="fd" dev="proc" ino=18263 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 00:49:23.745000 audit[3490]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffcb89fa2a a2=241 a3=1b6 items=1 ppid=3462 pid=3490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:23.749968 kernel: audit: type=1400 audit(1719881363.745:283): avc: denied { write } for pid=3490 comm="tee" name="fd" dev="proc" ino=18263 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 00:49:23.750046 kernel: audit: type=1300 audit(1719881363.745:283): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffcb89fa2a a2=241 a3=1b6 items=1 ppid=3462 pid=3490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:23.745000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jul 2 00:49:23.745000 audit: PATH item=0 name="/dev/fd/63" inode=18260 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 00:49:23.756042 kernel: audit: type=1307 audit(1719881363.745:283): cwd="/etc/service/enabled/bird6/log" Jul 2 00:49:23.756113 kernel: audit: type=1302 audit(1719881363.745:283): item=0 name="/dev/fd/63" inode=18260 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 00:49:23.756131 kernel: audit: type=1327 audit(1719881363.745:283): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 00:49:23.745000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 00:49:23.761000 audit[3505]: AVC avc: denied { write } for pid=3505 comm="tee" name="fd" dev="proc" ino=19829 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 00:49:23.761000 audit[3505]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffdb47ea2a a2=241 a3=1b6 items=1 ppid=3464 pid=3505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:23.766727 kernel: audit: type=1400 audit(1719881363.761:284): avc: denied { write } for pid=3505 comm="tee" name="fd" dev="proc" ino=19829 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 00:49:23.766802 kernel: audit: type=1300 audit(1719881363.761:284): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffdb47ea2a a2=241 a3=1b6 items=1 ppid=3464 pid=3505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:23.761000 audit: CWD cwd="/etc/service/enabled/confd/log" Jul 2 00:49:23.769815 kernel: audit: type=1307 audit(1719881363.761:284): cwd="/etc/service/enabled/confd/log" Jul 2 00:49:23.761000 audit: PATH item=0 name="/dev/fd/63" inode=17218 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 00:49:23.772106 kernel: audit: type=1302 audit(1719881363.761:284): item=0 name="/dev/fd/63" inode=17218 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 00:49:23.761000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 00:49:23.773832 kernel: audit: type=1327 audit(1719881363.761:284): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 00:49:23.765000 audit[3509]: AVC avc: denied { write } for pid=3509 comm="tee" name="fd" dev="proc" ino=18275 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 00:49:23.765000 audit[3509]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffecd86a2a a2=241 a3=1b6 items=1 ppid=3459 pid=3509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:23.765000 audit: CWD cwd="/etc/service/enabled/felix/log" Jul 2 00:49:23.765000 audit: PATH item=0 name="/dev/fd/63" inode=19825 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 00:49:23.765000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 00:49:23.765000 audit[3507]: AVC avc: denied { write } for pid=3507 comm="tee" name="fd" dev="proc" ino=19833 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 00:49:23.765000 audit[3507]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff78b2a1a a2=241 a3=1b6 items=1 ppid=3468 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:23.765000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jul 2 00:49:23.765000 audit: PATH item=0 
name="/dev/fd/63" inode=19824 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 00:49:23.765000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 00:49:23.775000 audit[3516]: AVC avc: denied { write } for pid=3516 comm="tee" name="fd" dev="proc" ino=17223 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 00:49:23.775000 audit[3516]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffea002a2c a2=241 a3=1b6 items=1 ppid=3455 pid=3516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:23.775000 audit: CWD cwd="/etc/service/enabled/cni/log" Jul 2 00:49:23.775000 audit: PATH item=0 name="/dev/fd/63" inode=19826 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 00:49:23.775000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 00:49:23.777000 audit[3526]: AVC avc: denied { write } for pid=3526 comm="tee" name="fd" dev="proc" ino=19169 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 00:49:23.777000 audit[3526]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffffb09ea2b a2=241 a3=1b6 items=1 ppid=3454 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:23.777000 audit: CWD cwd="/etc/service/enabled/bird/log" Jul 2 00:49:23.777000 audit: PATH item=0 name="/dev/fd/63" inode=19835 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 00:49:23.777000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 00:49:23.780000 audit[3518]: AVC avc: denied { write } for pid=3518 comm="tee" name="fd" dev="proc" ino=19840 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 00:49:23.780000 audit[3518]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffdd742a1b a2=241 a3=1b6 items=1 ppid=3460 pid=3518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:23.780000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jul 2 00:49:23.780000 audit: PATH item=0 name="/dev/fd/63" inode=19162 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 00:49:23.780000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 00:49:24.112194 kubelet[2384]: 
I0702 00:49:24.112098 2384 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:49:24.112950 kubelet[2384]: E0702 00:49:24.112932 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:49:24.536380 systemd[1]: Started sshd@7-10.0.0.149:22-10.0.0.1:34006.service - OpenSSH per-connection server daemon (10.0.0.1:34006). Jul 2 00:49:24.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.149:22-10.0.0.1:34006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:49:24.592000 audit[3546]: USER_ACCT pid=3546 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:24.592859 sshd[3546]: Accepted publickey for core from 10.0.0.1 port 34006 ssh2: RSA SHA256:Kp5j1k40JRjOvdBxgA97BAtl42hDR/cCrPEvDSx0HPk Jul 2 00:49:24.593000 audit[3546]: CRED_ACQ pid=3546 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:24.593000 audit[3546]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc52c6c70 a2=3 a3=1 items=0 ppid=1 pid=3546 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:24.593000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 00:49:24.594501 sshd[3546]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:49:24.598577 systemd-logind[1334]: New session 8 of user core. Jul 2 00:49:24.607478 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 2 00:49:24.614000 audit[3546]: USER_START pid=3546 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:24.616000 audit[3549]: CRED_ACQ pid=3549 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:24.801309 sshd[3546]: pam_unix(sshd:session): session closed for user core Jul 2 00:49:24.800000 audit[3546]: USER_END pid=3546 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:24.801000 audit[3546]: CRED_DISP pid=3546 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:24.804107 systemd[1]: sshd@7-10.0.0.149:22-10.0.0.1:34006.service: Deactivated successfully. 
Jul 2 00:49:24.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.149:22-10.0.0.1:34006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:49:24.805187 systemd-logind[1334]: Session 8 logged out. Waiting for processes to exit. Jul 2 00:49:24.805304 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 00:49:24.805929 systemd-logind[1334]: Removed session 8. Jul 2 00:49:29.833615 systemd[1]: Started sshd@8-10.0.0.149:22-10.0.0.1:34010.service - OpenSSH per-connection server daemon (10.0.0.1:34010). Jul 2 00:49:29.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.149:22-10.0.0.1:34010 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:49:29.834722 kernel: kauditd_printk_skb: 36 callbacks suppressed Jul 2 00:49:29.834778 kernel: audit: type=1130 audit(1719881369.832:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.149:22-10.0.0.1:34010 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:49:29.873000 audit[3688]: USER_ACCT pid=3688 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:29.874776 sshd[3688]: Accepted publickey for core from 10.0.0.1 port 34010 ssh2: RSA SHA256:Kp5j1k40JRjOvdBxgA97BAtl42hDR/cCrPEvDSx0HPk Jul 2 00:49:29.878179 sshd[3688]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:49:29.874000 audit[3688]: CRED_ACQ pid=3688 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:29.884401 kernel: audit: type=1101 audit(1719881369.873:300): pid=3688 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:29.884472 kernel: audit: type=1103 audit(1719881369.874:301): pid=3688 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:29.884505 kernel: audit: type=1006 audit(1719881369.874:302): pid=3688 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jul 2 00:49:29.885033 kernel: audit: type=1300 audit(1719881369.874:302): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc7cb0aa0 a2=3 a3=1 items=0 ppid=1 pid=3688 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:29.874000 audit[3688]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc7cb0aa0 a2=3 a3=1 items=0 ppid=1 pid=3688 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 
00:49:29.885189 systemd-logind[1334]: New session 9 of user core. Jul 2 00:49:29.899555 kernel: audit: type=1327 audit(1719881369.874:302): proctitle=737368643A20636F7265205B707269765D Jul 2 00:49:29.874000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 00:49:29.899560 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 2 00:49:29.902000 audit[3688]: USER_START pid=3688 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:29.905000 audit[3691]: CRED_ACQ pid=3691 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:29.908857 kernel: audit: type=1105 audit(1719881369.902:303): pid=3688 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:29.908925 kernel: audit: type=1103 audit(1719881369.905:304): pid=3691 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:30.058877 sshd[3688]: pam_unix(sshd:session): session closed for user core Jul 2 00:49:30.058000 audit[3688]: USER_END pid=3688 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:30.058000 audit[3688]: CRED_DISP pid=3688 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:30.061614 systemd-logind[1334]: Session 9 logged out. Waiting for processes to exit. Jul 2 00:49:30.061726 systemd[1]: sshd@8-10.0.0.149:22-10.0.0.1:34010.service: Deactivated successfully. Jul 2 00:49:30.062547 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 00:49:30.062979 systemd-logind[1334]: Removed session 9. Jul 2 00:49:30.065029 kernel: audit: type=1106 audit(1719881370.058:305): pid=3688 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:30.065081 kernel: audit: type=1104 audit(1719881370.058:306): pid=3688 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:30.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.149:22-10.0.0.1:34010 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:49:30.237114 kubelet[2384]: I0702 00:49:30.237056 2384 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:49:30.237865 kubelet[2384]: E0702 00:49:30.237837 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:49:31.977498 containerd[1351]: time="2024-07-02T00:49:31.977453633Z" level=info msg="StopPodSandbox for \"1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a\"" Jul 2 00:49:31.978015 containerd[1351]: time="2024-07-02T00:49:31.977627645Z" level=info msg="StopPodSandbox for \"1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090\"" Jul 2 00:49:32.179272 containerd[1351]: 2024-07-02 00:49:32.084 [INFO][3831] k8s.go 608: Cleaning up netns ContainerID="1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" Jul 2 00:49:32.179272 containerd[1351]: 2024-07-02 00:49:32.085 [INFO][3831] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" iface="eth0" netns="/var/run/netns/cni-5dd585f7-8f51-2c33-4e5f-319854e25654" Jul 2 00:49:32.179272 containerd[1351]: 2024-07-02 00:49:32.085 [INFO][3831] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" iface="eth0" netns="/var/run/netns/cni-5dd585f7-8f51-2c33-4e5f-319854e25654" Jul 2 00:49:32.179272 containerd[1351]: 2024-07-02 00:49:32.086 [INFO][3831] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" iface="eth0" netns="/var/run/netns/cni-5dd585f7-8f51-2c33-4e5f-319854e25654" Jul 2 00:49:32.179272 containerd[1351]: 2024-07-02 00:49:32.086 [INFO][3831] k8s.go 615: Releasing IP address(es) ContainerID="1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" Jul 2 00:49:32.179272 containerd[1351]: 2024-07-02 00:49:32.086 [INFO][3831] utils.go 188: Calico CNI releasing IP address ContainerID="1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" Jul 2 00:49:32.179272 containerd[1351]: 2024-07-02 00:49:32.162 [INFO][3846] ipam_plugin.go 411: Releasing address using handleID ContainerID="1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" HandleID="k8s-pod-network.1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" Workload="localhost-k8s-calico--kube--controllers--f49f44d4b--xn8pc-eth0" Jul 2 00:49:32.179272 containerd[1351]: 2024-07-02 00:49:32.163 [INFO][3846] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:49:32.179272 containerd[1351]: 2024-07-02 00:49:32.163 [INFO][3846] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:49:32.179272 containerd[1351]: 2024-07-02 00:49:32.171 [WARNING][3846] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" HandleID="k8s-pod-network.1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" Workload="localhost-k8s-calico--kube--controllers--f49f44d4b--xn8pc-eth0" Jul 2 00:49:32.179272 containerd[1351]: 2024-07-02 00:49:32.172 [INFO][3846] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" HandleID="k8s-pod-network.1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" Workload="localhost-k8s-calico--kube--controllers--f49f44d4b--xn8pc-eth0" Jul 2 00:49:32.179272 containerd[1351]: 2024-07-02 00:49:32.173 [INFO][3846] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:49:32.179272 containerd[1351]: 2024-07-02 00:49:32.175 [INFO][3831] k8s.go 621: Teardown processing complete. ContainerID="1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" Jul 2 00:49:32.179272 containerd[1351]: time="2024-07-02T00:49:32.177165900Z" level=info msg="TearDown network for sandbox \"1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090\" successfully" Jul 2 00:49:32.179272 containerd[1351]: time="2024-07-02T00:49:32.177202822Z" level=info msg="StopPodSandbox for \"1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090\" returns successfully" Jul 2 00:49:32.179866 containerd[1351]: time="2024-07-02T00:49:32.179826157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f49f44d4b-xn8pc,Uid:72fe4de1-b2e6-441a-b885-9fce4e3cebac,Namespace:calico-system,Attempt:1,}" Jul 2 00:49:32.183778 systemd[1]: run-netns-cni\x2d5dd585f7\x2d8f51\x2d2c33\x2d4e5f\x2d319854e25654.mount: Deactivated successfully. Jul 2 00:49:32.190444 containerd[1351]: 2024-07-02 00:49:32.088 [INFO][3830] k8s.go 608: Cleaning up netns ContainerID="1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" Jul 2 00:49:32.190444 containerd[1351]: 2024-07-02 00:49:32.089 [INFO][3830] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" iface="eth0" netns="/var/run/netns/cni-cf165874-c1f1-e2d1-2d77-5fc24931c2c5" Jul 2 00:49:32.190444 containerd[1351]: 2024-07-02 00:49:32.089 [INFO][3830] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" iface="eth0" netns="/var/run/netns/cni-cf165874-c1f1-e2d1-2d77-5fc24931c2c5" Jul 2 00:49:32.190444 containerd[1351]: 2024-07-02 00:49:32.089 [INFO][3830] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" iface="eth0" netns="/var/run/netns/cni-cf165874-c1f1-e2d1-2d77-5fc24931c2c5" Jul 2 00:49:32.190444 containerd[1351]: 2024-07-02 00:49:32.089 [INFO][3830] k8s.go 615: Releasing IP address(es) ContainerID="1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" Jul 2 00:49:32.190444 containerd[1351]: 2024-07-02 00:49:32.089 [INFO][3830] utils.go 188: Calico CNI releasing IP address ContainerID="1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" Jul 2 00:49:32.190444 containerd[1351]: 2024-07-02 00:49:32.162 [INFO][3847] ipam_plugin.go 411: Releasing address using handleID ContainerID="1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" HandleID="k8s-pod-network.1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" Workload="localhost-k8s-coredns--5dd5756b68--qwwn6-eth0" Jul 2 00:49:32.190444 containerd[1351]: 2024-07-02 00:49:32.163 [INFO][3847] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:49:32.190444 containerd[1351]: 2024-07-02 00:49:32.173 [INFO][3847] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:49:32.190444 containerd[1351]: 2024-07-02 00:49:32.183 [WARNING][3847] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" HandleID="k8s-pod-network.1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" Workload="localhost-k8s-coredns--5dd5756b68--qwwn6-eth0" Jul 2 00:49:32.190444 containerd[1351]: 2024-07-02 00:49:32.184 [INFO][3847] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" HandleID="k8s-pod-network.1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" Workload="localhost-k8s-coredns--5dd5756b68--qwwn6-eth0" Jul 2 00:49:32.190444 containerd[1351]: 2024-07-02 00:49:32.186 [INFO][3847] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:49:32.190444 containerd[1351]: 2024-07-02 00:49:32.188 [INFO][3830] k8s.go 621: Teardown processing complete. ContainerID="1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" Jul 2 00:49:32.192697 systemd[1]: run-netns-cni\x2dcf165874\x2dc1f1\x2de2d1\x2d2d77\x2d5fc24931c2c5.mount: Deactivated successfully. 
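The recurring kubelet dns.go warnings above come from the pod resolv.conf limit: kubelet applies at most three nameserver entries (the effective glibc resolver limit), so extra entries in the node's resolv.conf are dropped and only "1.1.1.1 1.0.0.1 8.8.8.8" is applied. A rough illustration of that trimming (the constant, the function name and the fourth address are assumptions for illustration; the real logic lives in kubelet's dns.go):

```python
MAX_NAMESERVERS = 3  # effective resolv.conf limit kubelet enforces (assumption for this sketch)

def apply_nameserver_limit(nameservers: list[str]) -> list[str]:
    # Keep only the first MAX_NAMESERVERS entries and warn, mirroring the log message above.
    if len(nameservers) > MAX_NAMESERVERS:
        kept = nameservers[:MAX_NAMESERVERS]
        print("Nameserver limits were exceeded, some nameservers have been omitted, "
              f"the applied nameserver line is: {' '.join(kept)}")
        return kept
    return nameservers

# Hypothetical node resolv.conf contents that would reproduce the message seen in this log:
apply_nameserver_limit(["1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"])
```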
Jul 2 00:49:32.192843 containerd[1351]: time="2024-07-02T00:49:32.192798499Z" level=info msg="TearDown network for sandbox \"1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a\" successfully" Jul 2 00:49:32.192843 containerd[1351]: time="2024-07-02T00:49:32.192838981Z" level=info msg="StopPodSandbox for \"1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a\" returns successfully" Jul 2 00:49:32.193497 kubelet[2384]: E0702 00:49:32.193469 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:49:32.196232 containerd[1351]: time="2024-07-02T00:49:32.195137694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-qwwn6,Uid:c5486d5f-6264-40ba-9f70-21d17d9388ed,Namespace:kube-system,Attempt:1,}" Jul 2 00:49:32.347382 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 00:49:32.347497 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali1f8645b4ce9: link becomes ready Jul 2 00:49:32.346223 systemd-networkd[1144]: cali1f8645b4ce9: Link UP Jul 2 00:49:32.347952 kubelet[2384]: I0702 00:49:32.347524 2384 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:49:32.347952 kubelet[2384]: E0702 00:49:32.348290 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:49:32.347185 systemd-networkd[1144]: cali1f8645b4ce9: Gained carrier Jul 2 00:49:32.374087 containerd[1351]: 2024-07-02 00:49:32.228 [INFO][3863] utils.go 100: File /var/lib/calico/mtu does not exist Jul 2 00:49:32.374087 containerd[1351]: 2024-07-02 00:49:32.244 [INFO][3863] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--qwwn6-eth0 coredns-5dd5756b68- kube-system c5486d5f-6264-40ba-9f70-21d17d9388ed 752 0 2024-07-02 00:49:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-qwwn6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1f8645b4ce9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="298468d3f9bdcbb99ba4d52fc1af55fc94c558a613bd1317358a90d63a6899e8" Namespace="kube-system" Pod="coredns-5dd5756b68-qwwn6" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--qwwn6-" Jul 2 00:49:32.374087 containerd[1351]: 2024-07-02 00:49:32.244 [INFO][3863] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="298468d3f9bdcbb99ba4d52fc1af55fc94c558a613bd1317358a90d63a6899e8" Namespace="kube-system" Pod="coredns-5dd5756b68-qwwn6" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--qwwn6-eth0" Jul 2 00:49:32.374087 containerd[1351]: 2024-07-02 00:49:32.278 [INFO][3898] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="298468d3f9bdcbb99ba4d52fc1af55fc94c558a613bd1317358a90d63a6899e8" HandleID="k8s-pod-network.298468d3f9bdcbb99ba4d52fc1af55fc94c558a613bd1317358a90d63a6899e8" Workload="localhost-k8s-coredns--5dd5756b68--qwwn6-eth0" Jul 2 00:49:32.374087 containerd[1351]: 2024-07-02 00:49:32.293 [INFO][3898] ipam_plugin.go 264: Auto assigning IP ContainerID="298468d3f9bdcbb99ba4d52fc1af55fc94c558a613bd1317358a90d63a6899e8" 
HandleID="k8s-pod-network.298468d3f9bdcbb99ba4d52fc1af55fc94c558a613bd1317358a90d63a6899e8" Workload="localhost-k8s-coredns--5dd5756b68--qwwn6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005a9d70), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-qwwn6", "timestamp":"2024-07-02 00:49:32.278330902 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:49:32.374087 containerd[1351]: 2024-07-02 00:49:32.293 [INFO][3898] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:49:32.374087 containerd[1351]: 2024-07-02 00:49:32.293 [INFO][3898] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:49:32.374087 containerd[1351]: 2024-07-02 00:49:32.293 [INFO][3898] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 00:49:32.374087 containerd[1351]: 2024-07-02 00:49:32.295 [INFO][3898] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.298468d3f9bdcbb99ba4d52fc1af55fc94c558a613bd1317358a90d63a6899e8" host="localhost" Jul 2 00:49:32.374087 containerd[1351]: 2024-07-02 00:49:32.299 [INFO][3898] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 00:49:32.374087 containerd[1351]: 2024-07-02 00:49:32.303 [INFO][3898] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 00:49:32.374087 containerd[1351]: 2024-07-02 00:49:32.305 [INFO][3898] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 00:49:32.374087 containerd[1351]: 2024-07-02 00:49:32.307 [INFO][3898] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 00:49:32.374087 containerd[1351]: 2024-07-02 00:49:32.307 [INFO][3898] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.298468d3f9bdcbb99ba4d52fc1af55fc94c558a613bd1317358a90d63a6899e8" host="localhost" Jul 2 00:49:32.374087 containerd[1351]: 2024-07-02 00:49:32.309 [INFO][3898] ipam.go 1685: Creating new handle: k8s-pod-network.298468d3f9bdcbb99ba4d52fc1af55fc94c558a613bd1317358a90d63a6899e8 Jul 2 00:49:32.374087 containerd[1351]: 2024-07-02 00:49:32.316 [INFO][3898] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.298468d3f9bdcbb99ba4d52fc1af55fc94c558a613bd1317358a90d63a6899e8" host="localhost" Jul 2 00:49:32.374087 containerd[1351]: 2024-07-02 00:49:32.320 [INFO][3898] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.298468d3f9bdcbb99ba4d52fc1af55fc94c558a613bd1317358a90d63a6899e8" host="localhost" Jul 2 00:49:32.374087 containerd[1351]: 2024-07-02 00:49:32.320 [INFO][3898] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.298468d3f9bdcbb99ba4d52fc1af55fc94c558a613bd1317358a90d63a6899e8" host="localhost" Jul 2 00:49:32.374087 containerd[1351]: 2024-07-02 00:49:32.320 [INFO][3898] ipam_plugin.go 373: Released host-wide IPAM lock. 
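The Calico CNI plugin's own log lines are embedded inside the containerd entries above (inner timestamp, level, a bracketed request number, source file and line, then the message). A small parsing sketch for that inner format (the field names are my own labels, not anything Calico defines):

```python
import re

# Inner Calico CNI line format as it appears wrapped in the containerd entries above.
CALICO_LINE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) "
    r"\[(?P<level>[A-Z]+)\]\[(?P<req>\d+)\] "
    r"(?P<source>\S+ \d+): (?P<msg>.*)"
)

sample = ("2024-07-02 00:49:32.293 [INFO][3898] ipam_plugin.go 352: "
          "About to acquire host-wide IPAM lock.")
m = CALICO_LINE.match(sample)
print(m.group("level"), m.group("source"), "-", m.group("msg"))
# INFO ipam_plugin.go 352 - About to acquire host-wide IPAM lock.
```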
Jul 2 00:49:32.374087 containerd[1351]: 2024-07-02 00:49:32.320 [INFO][3898] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="298468d3f9bdcbb99ba4d52fc1af55fc94c558a613bd1317358a90d63a6899e8" HandleID="k8s-pod-network.298468d3f9bdcbb99ba4d52fc1af55fc94c558a613bd1317358a90d63a6899e8" Workload="localhost-k8s-coredns--5dd5756b68--qwwn6-eth0" Jul 2 00:49:32.375011 containerd[1351]: 2024-07-02 00:49:32.322 [INFO][3863] k8s.go 386: Populated endpoint ContainerID="298468d3f9bdcbb99ba4d52fc1af55fc94c558a613bd1317358a90d63a6899e8" Namespace="kube-system" Pod="coredns-5dd5756b68-qwwn6" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--qwwn6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--qwwn6-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"c5486d5f-6264-40ba-9f70-21d17d9388ed", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 49, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-qwwn6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f8645b4ce9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:49:32.375011 containerd[1351]: 2024-07-02 00:49:32.322 [INFO][3863] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="298468d3f9bdcbb99ba4d52fc1af55fc94c558a613bd1317358a90d63a6899e8" Namespace="kube-system" Pod="coredns-5dd5756b68-qwwn6" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--qwwn6-eth0" Jul 2 00:49:32.375011 containerd[1351]: 2024-07-02 00:49:32.322 [INFO][3863] dataplane_linux.go 68: Setting the host side veth name to cali1f8645b4ce9 ContainerID="298468d3f9bdcbb99ba4d52fc1af55fc94c558a613bd1317358a90d63a6899e8" Namespace="kube-system" Pod="coredns-5dd5756b68-qwwn6" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--qwwn6-eth0" Jul 2 00:49:32.375011 containerd[1351]: 2024-07-02 00:49:32.349 [INFO][3863] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="298468d3f9bdcbb99ba4d52fc1af55fc94c558a613bd1317358a90d63a6899e8" Namespace="kube-system" Pod="coredns-5dd5756b68-qwwn6" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--qwwn6-eth0" Jul 2 00:49:32.375011 containerd[1351]: 2024-07-02 00:49:32.349 [INFO][3863] k8s.go 414: Added Mac, interface name, and 
active container ID to endpoint ContainerID="298468d3f9bdcbb99ba4d52fc1af55fc94c558a613bd1317358a90d63a6899e8" Namespace="kube-system" Pod="coredns-5dd5756b68-qwwn6" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--qwwn6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--qwwn6-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"c5486d5f-6264-40ba-9f70-21d17d9388ed", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 49, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"298468d3f9bdcbb99ba4d52fc1af55fc94c558a613bd1317358a90d63a6899e8", Pod:"coredns-5dd5756b68-qwwn6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f8645b4ce9", MAC:"3e:95:72:ef:40:0f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:49:32.375011 containerd[1351]: 2024-07-02 00:49:32.366 [INFO][3863] k8s.go 500: Wrote updated endpoint to datastore ContainerID="298468d3f9bdcbb99ba4d52fc1af55fc94c558a613bd1317358a90d63a6899e8" Namespace="kube-system" Pod="coredns-5dd5756b68-qwwn6" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--qwwn6-eth0" Jul 2 00:49:32.392260 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali1428750659c: link becomes ready Jul 2 00:49:32.392446 systemd-networkd[1144]: cali1428750659c: Link UP Jul 2 00:49:32.392657 systemd-networkd[1144]: cali1428750659c: Gained carrier Jul 2 00:49:32.403000 audit[3946]: NETFILTER_CFG table=filter:97 family=2 entries=15 op=nft_register_rule pid=3946 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 00:49:32.403000 audit[3946]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=ffffce9f3100 a2=0 a3=1 items=0 ppid=2587 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:32.403000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 00:49:32.405892 containerd[1351]: 2024-07-02 00:49:32.227 [INFO][3862] utils.go 100: File /var/lib/calico/mtu does not exist Jul 2 00:49:32.405892 containerd[1351]: 2024-07-02 00:49:32.244 [INFO][3862] plugin.go 326: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--f49f44d4b--xn8pc-eth0 calico-kube-controllers-f49f44d4b- calico-system 72fe4de1-b2e6-441a-b885-9fce4e3cebac 751 0 2024-07-02 00:49:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:f49f44d4b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-f49f44d4b-xn8pc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1428750659c [] []}} ContainerID="8b638965aa730f99f3ec7c0b8f17ea443872f5f62b97cbfa56be4835a9e21ce6" Namespace="calico-system" Pod="calico-kube-controllers-f49f44d4b-xn8pc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f49f44d4b--xn8pc-" Jul 2 00:49:32.405892 containerd[1351]: 2024-07-02 00:49:32.244 [INFO][3862] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8b638965aa730f99f3ec7c0b8f17ea443872f5f62b97cbfa56be4835a9e21ce6" Namespace="calico-system" Pod="calico-kube-controllers-f49f44d4b-xn8pc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f49f44d4b--xn8pc-eth0" Jul 2 00:49:32.405892 containerd[1351]: 2024-07-02 00:49:32.281 [INFO][3897] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b638965aa730f99f3ec7c0b8f17ea443872f5f62b97cbfa56be4835a9e21ce6" HandleID="k8s-pod-network.8b638965aa730f99f3ec7c0b8f17ea443872f5f62b97cbfa56be4835a9e21ce6" Workload="localhost-k8s-calico--kube--controllers--f49f44d4b--xn8pc-eth0" Jul 2 00:49:32.405892 containerd[1351]: 2024-07-02 00:49:32.293 [INFO][3897] ipam_plugin.go 264: Auto assigning IP ContainerID="8b638965aa730f99f3ec7c0b8f17ea443872f5f62b97cbfa56be4835a9e21ce6" HandleID="k8s-pod-network.8b638965aa730f99f3ec7c0b8f17ea443872f5f62b97cbfa56be4835a9e21ce6" Workload="localhost-k8s-calico--kube--controllers--f49f44d4b--xn8pc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400057b210), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-f49f44d4b-xn8pc", "timestamp":"2024-07-02 00:49:32.281836895 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:49:32.405892 containerd[1351]: 2024-07-02 00:49:32.293 [INFO][3897] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:49:32.405892 containerd[1351]: 2024-07-02 00:49:32.320 [INFO][3897] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:49:32.405892 containerd[1351]: 2024-07-02 00:49:32.321 [INFO][3897] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 00:49:32.405892 containerd[1351]: 2024-07-02 00:49:32.323 [INFO][3897] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8b638965aa730f99f3ec7c0b8f17ea443872f5f62b97cbfa56be4835a9e21ce6" host="localhost" Jul 2 00:49:32.405892 containerd[1351]: 2024-07-02 00:49:32.328 [INFO][3897] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 00:49:32.405892 containerd[1351]: 2024-07-02 00:49:32.337 [INFO][3897] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 00:49:32.405892 containerd[1351]: 2024-07-02 00:49:32.339 [INFO][3897] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 00:49:32.405892 containerd[1351]: 2024-07-02 00:49:32.357 [INFO][3897] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 00:49:32.405892 containerd[1351]: 2024-07-02 00:49:32.358 [INFO][3897] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8b638965aa730f99f3ec7c0b8f17ea443872f5f62b97cbfa56be4835a9e21ce6" host="localhost" Jul 2 00:49:32.405892 containerd[1351]: 2024-07-02 00:49:32.369 [INFO][3897] ipam.go 1685: Creating new handle: k8s-pod-network.8b638965aa730f99f3ec7c0b8f17ea443872f5f62b97cbfa56be4835a9e21ce6 Jul 2 00:49:32.405892 containerd[1351]: 2024-07-02 00:49:32.374 [INFO][3897] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8b638965aa730f99f3ec7c0b8f17ea443872f5f62b97cbfa56be4835a9e21ce6" host="localhost" Jul 2 00:49:32.405892 containerd[1351]: 2024-07-02 00:49:32.379 [INFO][3897] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.8b638965aa730f99f3ec7c0b8f17ea443872f5f62b97cbfa56be4835a9e21ce6" host="localhost" Jul 2 00:49:32.405892 containerd[1351]: 2024-07-02 00:49:32.379 [INFO][3897] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.8b638965aa730f99f3ec7c0b8f17ea443872f5f62b97cbfa56be4835a9e21ce6" host="localhost" Jul 2 00:49:32.405892 containerd[1351]: 2024-07-02 00:49:32.379 [INFO][3897] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 00:49:32.405892 containerd[1351]: 2024-07-02 00:49:32.379 [INFO][3897] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="8b638965aa730f99f3ec7c0b8f17ea443872f5f62b97cbfa56be4835a9e21ce6" HandleID="k8s-pod-network.8b638965aa730f99f3ec7c0b8f17ea443872f5f62b97cbfa56be4835a9e21ce6" Workload="localhost-k8s-calico--kube--controllers--f49f44d4b--xn8pc-eth0" Jul 2 00:49:32.406512 containerd[1351]: 2024-07-02 00:49:32.386 [INFO][3862] k8s.go 386: Populated endpoint ContainerID="8b638965aa730f99f3ec7c0b8f17ea443872f5f62b97cbfa56be4835a9e21ce6" Namespace="calico-system" Pod="calico-kube-controllers-f49f44d4b-xn8pc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f49f44d4b--xn8pc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f49f44d4b--xn8pc-eth0", GenerateName:"calico-kube-controllers-f49f44d4b-", Namespace:"calico-system", SelfLink:"", UID:"72fe4de1-b2e6-441a-b885-9fce4e3cebac", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 49, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f49f44d4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-f49f44d4b-xn8pc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1428750659c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:49:32.406512 containerd[1351]: 2024-07-02 00:49:32.386 [INFO][3862] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="8b638965aa730f99f3ec7c0b8f17ea443872f5f62b97cbfa56be4835a9e21ce6" Namespace="calico-system" Pod="calico-kube-controllers-f49f44d4b-xn8pc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f49f44d4b--xn8pc-eth0" Jul 2 00:49:32.406512 containerd[1351]: 2024-07-02 00:49:32.386 [INFO][3862] dataplane_linux.go 68: Setting the host side veth name to cali1428750659c ContainerID="8b638965aa730f99f3ec7c0b8f17ea443872f5f62b97cbfa56be4835a9e21ce6" Namespace="calico-system" Pod="calico-kube-controllers-f49f44d4b-xn8pc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f49f44d4b--xn8pc-eth0" Jul 2 00:49:32.406512 containerd[1351]: 2024-07-02 00:49:32.392 [INFO][3862] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8b638965aa730f99f3ec7c0b8f17ea443872f5f62b97cbfa56be4835a9e21ce6" Namespace="calico-system" Pod="calico-kube-controllers-f49f44d4b-xn8pc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f49f44d4b--xn8pc-eth0" Jul 2 00:49:32.406512 containerd[1351]: 2024-07-02 00:49:32.392 [INFO][3862] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8b638965aa730f99f3ec7c0b8f17ea443872f5f62b97cbfa56be4835a9e21ce6" Namespace="calico-system" Pod="calico-kube-controllers-f49f44d4b-xn8pc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f49f44d4b--xn8pc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f49f44d4b--xn8pc-eth0", GenerateName:"calico-kube-controllers-f49f44d4b-", Namespace:"calico-system", SelfLink:"", UID:"72fe4de1-b2e6-441a-b885-9fce4e3cebac", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 49, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f49f44d4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b638965aa730f99f3ec7c0b8f17ea443872f5f62b97cbfa56be4835a9e21ce6", Pod:"calico-kube-controllers-f49f44d4b-xn8pc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1428750659c", MAC:"ee:75:05:a1:77:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:49:32.406512 containerd[1351]: 2024-07-02 00:49:32.404 [INFO][3862] k8s.go 500: Wrote updated endpoint to datastore ContainerID="8b638965aa730f99f3ec7c0b8f17ea443872f5f62b97cbfa56be4835a9e21ce6" Namespace="calico-system" Pod="calico-kube-controllers-f49f44d4b-xn8pc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f49f44d4b--xn8pc-eth0" Jul 2 00:49:32.406000 audit[3946]: NETFILTER_CFG table=nat:98 family=2 entries=19 op=nft_register_chain pid=3946 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 00:49:32.406000 audit[3946]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffce9f3100 a2=0 a3=1 items=0 ppid=2587 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:32.406000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 00:49:32.437998 containerd[1351]: time="2024-07-02T00:49:32.437773057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:49:32.437998 containerd[1351]: time="2024-07-02T00:49:32.437825140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:49:32.437998 containerd[1351]: time="2024-07-02T00:49:32.437840741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:49:32.437998 containerd[1351]: time="2024-07-02T00:49:32.437851462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:49:32.439818 containerd[1351]: time="2024-07-02T00:49:32.439712386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:49:32.440026 containerd[1351]: time="2024-07-02T00:49:32.439788191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:49:32.440140 containerd[1351]: time="2024-07-02T00:49:32.440099291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:49:32.440337 containerd[1351]: time="2024-07-02T00:49:32.440124373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:49:32.465164 systemd-resolved[1294]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:49:32.466848 systemd-resolved[1294]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:49:32.487081 containerd[1351]: time="2024-07-02T00:49:32.487041250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-qwwn6,Uid:c5486d5f-6264-40ba-9f70-21d17d9388ed,Namespace:kube-system,Attempt:1,} returns sandbox id \"298468d3f9bdcbb99ba4d52fc1af55fc94c558a613bd1317358a90d63a6899e8\"" Jul 2 00:49:32.488127 containerd[1351]: time="2024-07-02T00:49:32.488091360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f49f44d4b-xn8pc,Uid:72fe4de1-b2e6-441a-b885-9fce4e3cebac,Namespace:calico-system,Attempt:1,} returns sandbox id \"8b638965aa730f99f3ec7c0b8f17ea443872f5f62b97cbfa56be4835a9e21ce6\"" Jul 2 00:49:32.488219 kubelet[2384]: E0702 00:49:32.488143 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:49:32.490659 containerd[1351]: time="2024-07-02T00:49:32.490612688Z" level=info msg="CreateContainer within sandbox \"298468d3f9bdcbb99ba4d52fc1af55fc94c558a613bd1317358a90d63a6899e8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:49:32.493169 containerd[1351]: time="2024-07-02T00:49:32.493139456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jul 2 00:49:32.505399 containerd[1351]: time="2024-07-02T00:49:32.505362388Z" level=info msg="CreateContainer within sandbox \"298468d3f9bdcbb99ba4d52fc1af55fc94c558a613bd1317358a90d63a6899e8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b4448db39332458432eceae2b03c528175a292e1810b2943576593f028570c7a\"" Jul 2 00:49:32.505841 containerd[1351]: time="2024-07-02T00:49:32.505807617Z" level=info msg="StartContainer for \"b4448db39332458432eceae2b03c528175a292e1810b2943576593f028570c7a\"" Jul 2 00:49:32.551394 containerd[1351]: time="2024-07-02T00:49:32.551348884Z" level=info msg="StartContainer for \"b4448db39332458432eceae2b03c528175a292e1810b2943576593f028570c7a\" returns successfully" Jul 2 00:49:32.978746 containerd[1351]: time="2024-07-02T00:49:32.978660478Z" level=info msg="StopPodSandbox for 
\"c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05\"" Jul 2 00:49:33.064813 containerd[1351]: 2024-07-02 00:49:33.018 [INFO][4120] k8s.go 608: Cleaning up netns ContainerID="c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" Jul 2 00:49:33.064813 containerd[1351]: 2024-07-02 00:49:33.019 [INFO][4120] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" iface="eth0" netns="/var/run/netns/cni-8a16a6ce-6bb1-9ceb-7bf4-65d8eab2d99b" Jul 2 00:49:33.064813 containerd[1351]: 2024-07-02 00:49:33.019 [INFO][4120] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" iface="eth0" netns="/var/run/netns/cni-8a16a6ce-6bb1-9ceb-7bf4-65d8eab2d99b" Jul 2 00:49:33.064813 containerd[1351]: 2024-07-02 00:49:33.019 [INFO][4120] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" iface="eth0" netns="/var/run/netns/cni-8a16a6ce-6bb1-9ceb-7bf4-65d8eab2d99b" Jul 2 00:49:33.064813 containerd[1351]: 2024-07-02 00:49:33.019 [INFO][4120] k8s.go 615: Releasing IP address(es) ContainerID="c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" Jul 2 00:49:33.064813 containerd[1351]: 2024-07-02 00:49:33.019 [INFO][4120] utils.go 188: Calico CNI releasing IP address ContainerID="c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" Jul 2 00:49:33.064813 containerd[1351]: 2024-07-02 00:49:33.044 [INFO][4127] ipam_plugin.go 411: Releasing address using handleID ContainerID="c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" HandleID="k8s-pod-network.c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" Workload="localhost-k8s-csi--node--driver--dczfn-eth0" Jul 2 00:49:33.064813 containerd[1351]: 2024-07-02 00:49:33.045 [INFO][4127] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:49:33.064813 containerd[1351]: 2024-07-02 00:49:33.045 [INFO][4127] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:49:33.064813 containerd[1351]: 2024-07-02 00:49:33.055 [WARNING][4127] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" HandleID="k8s-pod-network.c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" Workload="localhost-k8s-csi--node--driver--dczfn-eth0" Jul 2 00:49:33.064813 containerd[1351]: 2024-07-02 00:49:33.055 [INFO][4127] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" HandleID="k8s-pod-network.c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" Workload="localhost-k8s-csi--node--driver--dczfn-eth0" Jul 2 00:49:33.064813 containerd[1351]: 2024-07-02 00:49:33.057 [INFO][4127] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:49:33.064813 containerd[1351]: 2024-07-02 00:49:33.061 [INFO][4120] k8s.go 621: Teardown processing complete. 
ContainerID="c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" Jul 2 00:49:33.065504 containerd[1351]: time="2024-07-02T00:49:33.065003283Z" level=info msg="TearDown network for sandbox \"c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05\" successfully" Jul 2 00:49:33.065504 containerd[1351]: time="2024-07-02T00:49:33.065034685Z" level=info msg="StopPodSandbox for \"c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05\" returns successfully" Jul 2 00:49:33.066182 containerd[1351]: time="2024-07-02T00:49:33.065581801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dczfn,Uid:565b958b-5d9c-4fe5-96de-2157ed8f17c7,Namespace:calico-system,Attempt:1,}" Jul 2 00:49:33.147917 kubelet[2384]: E0702 00:49:33.147872 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:49:33.148840 kubelet[2384]: E0702 00:49:33.148823 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:49:33.175274 kubelet[2384]: I0702 00:49:33.174370 2384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-qwwn6" podStartSLOduration=26.17433231 podCreationTimestamp="2024-07-02 00:49:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:49:33.163751742 +0000 UTC m=+41.308752032" watchObservedRunningTime="2024-07-02 00:49:33.17433231 +0000 UTC m=+41.319332480" Jul 2 00:49:33.178000 audit[4168]: NETFILTER_CFG table=filter:99 family=2 entries=14 op=nft_register_rule pid=4168 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 00:49:33.178000 audit[4168]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=fffff8c0c320 a2=0 a3=1 items=0 ppid=2587 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:33.178000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 00:49:33.186532 systemd[1]: run-netns-cni\x2d8a16a6ce\x2d6bb1\x2d9ceb\x2d7bf4\x2d65d8eab2d99b.mount: Deactivated successfully. 
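The run-netns-cni\x2d….mount units reported as deactivated above are systemd's escaped names for the CNI network-namespace bind mounts (unit-name escaping turns '/' into '-' and a literal '-' into \x2d). A rough inverse, assuming the /run prefix (on this image /var/run is conventionally a symlink to /run, matching the /var/run/netns/cni-… paths in the Calico teardown lines):

```python
import re

def unescape_mount_unit(unit: str) -> str:
    # Reverse systemd path escaping: strip ".mount", turn unescaped "-" back into "/",
    # then decode the \xNN escapes protecting literal characters.
    stem = unit.removesuffix(".mount")
    as_path = "/" + stem.replace("-", "/")
    return re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), as_path)

print(unescape_mount_unit(r"run-netns-cni\x2d8a16a6ce\x2d6bb1\x2d9ceb\x2d7bf4\x2d65d8eab2d99b.mount"))
# /run/netns/cni-8a16a6ce-6bb1-9ceb-7bf4-65d8eab2d99b
```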
Jul 2 00:49:33.180000 audit[4168]: NETFILTER_CFG table=nat:100 family=2 entries=14 op=nft_register_rule pid=4168 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 00:49:33.180000 audit[4168]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=fffff8c0c320 a2=0 a3=1 items=0 ppid=2587 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:33.180000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 00:49:33.227433 systemd-networkd[1144]: cali5977f9ce63e: Link UP Jul 2 00:49:33.228748 systemd-networkd[1144]: cali5977f9ce63e: Gained carrier Jul 2 00:49:33.229270 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5977f9ce63e: link becomes ready Jul 2 00:49:33.244848 containerd[1351]: 2024-07-02 00:49:33.100 [INFO][4141] utils.go 100: File /var/lib/calico/mtu does not exist Jul 2 00:49:33.244848 containerd[1351]: 2024-07-02 00:49:33.115 [INFO][4141] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--dczfn-eth0 csi-node-driver- calico-system 565b958b-5d9c-4fe5-96de-2157ed8f17c7 779 0 2024-07-02 00:49:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-dczfn eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali5977f9ce63e [] []}} ContainerID="df84471630a341de13b454ddb9b009e77d7ec326b9b967af793a816f3cfa8694" Namespace="calico-system" Pod="csi-node-driver-dczfn" WorkloadEndpoint="localhost-k8s-csi--node--driver--dczfn-" Jul 2 00:49:33.244848 containerd[1351]: 2024-07-02 00:49:33.115 [INFO][4141] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="df84471630a341de13b454ddb9b009e77d7ec326b9b967af793a816f3cfa8694" Namespace="calico-system" Pod="csi-node-driver-dczfn" WorkloadEndpoint="localhost-k8s-csi--node--driver--dczfn-eth0" Jul 2 00:49:33.244848 containerd[1351]: 2024-07-02 00:49:33.166 [INFO][4156] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df84471630a341de13b454ddb9b009e77d7ec326b9b967af793a816f3cfa8694" HandleID="k8s-pod-network.df84471630a341de13b454ddb9b009e77d7ec326b9b967af793a816f3cfa8694" Workload="localhost-k8s-csi--node--driver--dczfn-eth0" Jul 2 00:49:33.244848 containerd[1351]: 2024-07-02 00:49:33.194 [INFO][4156] ipam_plugin.go 264: Auto assigning IP ContainerID="df84471630a341de13b454ddb9b009e77d7ec326b9b967af793a816f3cfa8694" HandleID="k8s-pod-network.df84471630a341de13b454ddb9b009e77d7ec326b9b967af793a816f3cfa8694" Workload="localhost-k8s-csi--node--driver--dczfn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400059ae90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-dczfn", "timestamp":"2024-07-02 00:49:33.166075093 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:49:33.244848 containerd[1351]: 2024-07-02 00:49:33.194 [INFO][4156] 
ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:49:33.244848 containerd[1351]: 2024-07-02 00:49:33.194 [INFO][4156] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:49:33.244848 containerd[1351]: 2024-07-02 00:49:33.194 [INFO][4156] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 00:49:33.244848 containerd[1351]: 2024-07-02 00:49:33.198 [INFO][4156] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.df84471630a341de13b454ddb9b009e77d7ec326b9b967af793a816f3cfa8694" host="localhost" Jul 2 00:49:33.244848 containerd[1351]: 2024-07-02 00:49:33.201 [INFO][4156] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 00:49:33.244848 containerd[1351]: 2024-07-02 00:49:33.205 [INFO][4156] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 00:49:33.244848 containerd[1351]: 2024-07-02 00:49:33.207 [INFO][4156] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 00:49:33.244848 containerd[1351]: 2024-07-02 00:49:33.210 [INFO][4156] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 00:49:33.244848 containerd[1351]: 2024-07-02 00:49:33.210 [INFO][4156] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.df84471630a341de13b454ddb9b009e77d7ec326b9b967af793a816f3cfa8694" host="localhost" Jul 2 00:49:33.244848 containerd[1351]: 2024-07-02 00:49:33.211 [INFO][4156] ipam.go 1685: Creating new handle: k8s-pod-network.df84471630a341de13b454ddb9b009e77d7ec326b9b967af793a816f3cfa8694 Jul 2 00:49:33.244848 containerd[1351]: 2024-07-02 00:49:33.216 [INFO][4156] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.df84471630a341de13b454ddb9b009e77d7ec326b9b967af793a816f3cfa8694" host="localhost" Jul 2 00:49:33.244848 containerd[1351]: 2024-07-02 00:49:33.221 [INFO][4156] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.df84471630a341de13b454ddb9b009e77d7ec326b9b967af793a816f3cfa8694" host="localhost" Jul 2 00:49:33.244848 containerd[1351]: 2024-07-02 00:49:33.221 [INFO][4156] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.df84471630a341de13b454ddb9b009e77d7ec326b9b967af793a816f3cfa8694" host="localhost" Jul 2 00:49:33.244848 containerd[1351]: 2024-07-02 00:49:33.221 [INFO][4156] ipam_plugin.go 373: Released host-wide IPAM lock. 
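At this point the host-affine IPAM block 192.168.88.128/26 has handed out 192.168.88.129, .130 and .131 to coredns-5dd5756b68-qwwn6, calico-kube-controllers-f49f44d4b-xn8pc and csi-node-driver-dczfn respectively. A quick sanity check of those values against the block, using only figures reported in the IPAM lines above:

```python
import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")   # host-affine block from the IPAM log lines
assigned = ["192.168.88.129", "192.168.88.130", "192.168.88.131"]

for a in assigned:
    assert ipaddress.ip_address(a) in block, f"{a} falls outside {block}"
print(f"{len(assigned)} of {block.num_addresses} addresses in {block} claimed on this host so far")
```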
Jul 2 00:49:33.244848 containerd[1351]: 2024-07-02 00:49:33.221 [INFO][4156] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="df84471630a341de13b454ddb9b009e77d7ec326b9b967af793a816f3cfa8694" HandleID="k8s-pod-network.df84471630a341de13b454ddb9b009e77d7ec326b9b967af793a816f3cfa8694" Workload="localhost-k8s-csi--node--driver--dczfn-eth0" Jul 2 00:49:33.245457 containerd[1351]: 2024-07-02 00:49:33.225 [INFO][4141] k8s.go 386: Populated endpoint ContainerID="df84471630a341de13b454ddb9b009e77d7ec326b9b967af793a816f3cfa8694" Namespace="calico-system" Pod="csi-node-driver-dczfn" WorkloadEndpoint="localhost-k8s-csi--node--driver--dczfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dczfn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"565b958b-5d9c-4fe5-96de-2157ed8f17c7", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 49, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-dczfn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali5977f9ce63e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:49:33.245457 containerd[1351]: 2024-07-02 00:49:33.225 [INFO][4141] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="df84471630a341de13b454ddb9b009e77d7ec326b9b967af793a816f3cfa8694" Namespace="calico-system" Pod="csi-node-driver-dczfn" WorkloadEndpoint="localhost-k8s-csi--node--driver--dczfn-eth0" Jul 2 00:49:33.245457 containerd[1351]: 2024-07-02 00:49:33.225 [INFO][4141] dataplane_linux.go 68: Setting the host side veth name to cali5977f9ce63e ContainerID="df84471630a341de13b454ddb9b009e77d7ec326b9b967af793a816f3cfa8694" Namespace="calico-system" Pod="csi-node-driver-dczfn" WorkloadEndpoint="localhost-k8s-csi--node--driver--dczfn-eth0" Jul 2 00:49:33.245457 containerd[1351]: 2024-07-02 00:49:33.227 [INFO][4141] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="df84471630a341de13b454ddb9b009e77d7ec326b9b967af793a816f3cfa8694" Namespace="calico-system" Pod="csi-node-driver-dczfn" WorkloadEndpoint="localhost-k8s-csi--node--driver--dczfn-eth0" Jul 2 00:49:33.245457 containerd[1351]: 2024-07-02 00:49:33.234 [INFO][4141] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="df84471630a341de13b454ddb9b009e77d7ec326b9b967af793a816f3cfa8694" Namespace="calico-system" Pod="csi-node-driver-dczfn" WorkloadEndpoint="localhost-k8s-csi--node--driver--dczfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dczfn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"565b958b-5d9c-4fe5-96de-2157ed8f17c7", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 49, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df84471630a341de13b454ddb9b009e77d7ec326b9b967af793a816f3cfa8694", Pod:"csi-node-driver-dczfn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali5977f9ce63e", MAC:"7a:43:0b:49:2a:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:49:33.245457 containerd[1351]: 2024-07-02 00:49:33.241 [INFO][4141] k8s.go 500: Wrote updated endpoint to datastore ContainerID="df84471630a341de13b454ddb9b009e77d7ec326b9b967af793a816f3cfa8694" Namespace="calico-system" Pod="csi-node-driver-dczfn" WorkloadEndpoint="localhost-k8s-csi--node--driver--dczfn-eth0" Jul 2 00:49:33.271544 systemd-networkd[1144]: vxlan.calico: Link UP Jul 2 00:49:33.271549 systemd-networkd[1144]: vxlan.calico: Gained carrier Jul 2 00:49:33.276658 containerd[1351]: time="2024-07-02T00:49:33.276496751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:49:33.276863 containerd[1351]: time="2024-07-02T00:49:33.276810052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:49:33.276966 containerd[1351]: time="2024-07-02T00:49:33.276939340Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:49:33.277055 containerd[1351]: time="2024-07-02T00:49:33.277029986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:49:33.324000 audit: BPF prog-id=10 op=LOAD Jul 2 00:49:33.324000 audit[4262]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffce701ca8 a2=70 a3=ffffce701d18 items=0 ppid=4134 pid=4262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:33.324000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 00:49:33.325000 audit: BPF prog-id=10 op=UNLOAD Jul 2 00:49:33.325000 audit: BPF prog-id=11 op=LOAD Jul 2 00:49:33.325000 audit[4262]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffce701ca8 a2=70 a3=4b243c items=0 ppid=4134 pid=4262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:33.325000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 00:49:33.325000 audit: BPF prog-id=11 op=UNLOAD Jul 2 00:49:33.325000 audit: BPF prog-id=12 op=LOAD Jul 2 00:49:33.325000 audit[4262]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffce701c48 a2=70 a3=ffffce701cb8 items=0 ppid=4134 pid=4262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:33.325000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 00:49:33.325000 audit: BPF prog-id=12 op=UNLOAD Jul 2 00:49:33.328000 audit: BPF prog-id=13 op=LOAD Jul 2 00:49:33.328000 audit[4262]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffce701c78 a2=70 a3=1eb34a9 items=0 ppid=4134 pid=4262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:33.328000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 00:49:33.333974 systemd-resolved[1294]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:49:33.350000 audit: BPF prog-id=13 op=UNLOAD Jul 2 00:49:33.377178 containerd[1351]: time="2024-07-02T00:49:33.377133893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dczfn,Uid:565b958b-5d9c-4fe5-96de-2157ed8f17c7,Namespace:calico-system,Attempt:1,} returns sandbox id \"df84471630a341de13b454ddb9b009e77d7ec326b9b967af793a816f3cfa8694\"" Jul 2 00:49:33.480000 audit[4316]: NETFILTER_CFG table=nat:101 family=2 entries=15 op=nft_register_chain pid=4316 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 00:49:33.480000 audit[4316]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffe5eed450 a2=0 a3=ffff97af7fa8 items=0 ppid=4134 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:33.480000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 00:49:33.482000 audit[4318]: NETFILTER_CFG table=mangle:102 family=2 entries=16 op=nft_register_chain pid=4318 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 00:49:33.482000 audit[4318]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffca513fb0 a2=0 a3=ffff8619dfa8 items=0 ppid=4134 pid=4318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:33.482000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 00:49:33.484000 audit[4315]: NETFILTER_CFG table=filter:103 family=2 entries=131 op=nft_register_chain pid=4315 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 00:49:33.484000 audit[4315]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=72432 a0=3 a1=ffffc49e2ac0 a2=0 a3=ffffb44fafa8 items=0 ppid=4134 pid=4315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:33.484000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 00:49:33.488000 audit[4314]: NETFILTER_CFG table=raw:104 family=2 entries=19 op=nft_register_chain pid=4314 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 00:49:33.488000 audit[4314]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6992 a0=3 a1=ffffe7087870 a2=0 a3=ffff928a5fa8 items=0 ppid=4134 pid=4314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:33.488000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 00:49:33.799574 containerd[1351]: time="2024-07-02T00:49:33.799458347Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:49:33.800335 containerd[1351]: time="2024-07-02T00:49:33.800298362Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Jul 2 00:49:33.801366 containerd[1351]: time="2024-07-02T00:49:33.801331349Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:49:33.802712 containerd[1351]: time="2024-07-02T00:49:33.802675716Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:49:33.804208 containerd[1351]: time="2024-07-02T00:49:33.804180494Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:49:33.804812 containerd[1351]: time="2024-07-02T00:49:33.804779373Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 1.311478106s" Jul 2 00:49:33.804886 containerd[1351]: time="2024-07-02T00:49:33.804815695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Jul 2 00:49:33.805933 containerd[1351]: time="2024-07-02T00:49:33.805907646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jul 2 00:49:33.814150 containerd[1351]: time="2024-07-02T00:49:33.814074977Z" level=info msg="CreateContainer within sandbox \"8b638965aa730f99f3ec7c0b8f17ea443872f5f62b97cbfa56be4835a9e21ce6\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 2 00:49:33.824285 containerd[1351]: time="2024-07-02T00:49:33.824217997Z" level=info msg="CreateContainer within sandbox \"8b638965aa730f99f3ec7c0b8f17ea443872f5f62b97cbfa56be4835a9e21ce6\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"959449dacf814a7782f7f53f65dfa3692100d993be4541b2f81a11a279a01abb\"" Jul 2 00:49:33.828171 containerd[1351]: time="2024-07-02T00:49:33.828132091Z" level=info msg="StartContainer for \"959449dacf814a7782f7f53f65dfa3692100d993be4541b2f81a11a279a01abb\"" Jul 2 00:49:33.892280 containerd[1351]: time="2024-07-02T00:49:33.885590026Z" level=info msg="StartContainer for \"959449dacf814a7782f7f53f65dfa3692100d993be4541b2f81a11a279a01abb\" returns successfully" Jul 2 00:49:33.979607 containerd[1351]: time="2024-07-02T00:49:33.979374723Z" level=info msg="StopPodSandbox for \"ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c\"" Jul 2 00:49:34.079345 containerd[1351]: 2024-07-02 00:49:34.044 [INFO][4381] k8s.go 608: Cleaning up netns ContainerID="ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" Jul 2 00:49:34.079345 containerd[1351]: 2024-07-02 00:49:34.044 [INFO][4381] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" iface="eth0" netns="/var/run/netns/cni-54cf1c45-e232-a162-72c6-323a31161bea" Jul 2 00:49:34.079345 containerd[1351]: 2024-07-02 00:49:34.045 [INFO][4381] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" iface="eth0" netns="/var/run/netns/cni-54cf1c45-e232-a162-72c6-323a31161bea" Jul 2 00:49:34.079345 containerd[1351]: 2024-07-02 00:49:34.045 [INFO][4381] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" iface="eth0" netns="/var/run/netns/cni-54cf1c45-e232-a162-72c6-323a31161bea" Jul 2 00:49:34.079345 containerd[1351]: 2024-07-02 00:49:34.045 [INFO][4381] k8s.go 615: Releasing IP address(es) ContainerID="ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" Jul 2 00:49:34.079345 containerd[1351]: 2024-07-02 00:49:34.045 [INFO][4381] utils.go 188: Calico CNI releasing IP address ContainerID="ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" Jul 2 00:49:34.079345 containerd[1351]: 2024-07-02 00:49:34.064 [INFO][4388] ipam_plugin.go 411: Releasing address using handleID ContainerID="ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" HandleID="k8s-pod-network.ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" Workload="localhost-k8s-coredns--5dd5756b68--b89tp-eth0" Jul 2 00:49:34.079345 containerd[1351]: 2024-07-02 00:49:34.064 [INFO][4388] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:49:34.079345 containerd[1351]: 2024-07-02 00:49:34.064 [INFO][4388] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:49:34.079345 containerd[1351]: 2024-07-02 00:49:34.072 [WARNING][4388] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" HandleID="k8s-pod-network.ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" Workload="localhost-k8s-coredns--5dd5756b68--b89tp-eth0" Jul 2 00:49:34.079345 containerd[1351]: 2024-07-02 00:49:34.072 [INFO][4388] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" HandleID="k8s-pod-network.ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" Workload="localhost-k8s-coredns--5dd5756b68--b89tp-eth0" Jul 2 00:49:34.079345 containerd[1351]: 2024-07-02 00:49:34.074 [INFO][4388] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:49:34.079345 containerd[1351]: 2024-07-02 00:49:34.076 [INFO][4381] k8s.go 621: Teardown processing complete. 
ContainerID="ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" Jul 2 00:49:34.079345 containerd[1351]: time="2024-07-02T00:49:34.078910288Z" level=info msg="TearDown network for sandbox \"ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c\" successfully" Jul 2 00:49:34.079345 containerd[1351]: time="2024-07-02T00:49:34.078941250Z" level=info msg="StopPodSandbox for \"ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c\" returns successfully" Jul 2 00:49:34.079812 kubelet[2384]: E0702 00:49:34.079237 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:49:34.080099 containerd[1351]: time="2024-07-02T00:49:34.079878830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-b89tp,Uid:4e4dbead-dbec-4913-9f82-ae7fdc8be31c,Namespace:kube-system,Attempt:1,}" Jul 2 00:49:34.105330 systemd-networkd[1144]: cali1428750659c: Gained IPv6LL Jul 2 00:49:34.162415 systemd-networkd[1144]: cali1f8645b4ce9: Gained IPv6LL Jul 2 00:49:34.174084 kubelet[2384]: E0702 00:49:34.173671 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:49:34.185491 systemd[1]: run-netns-cni\x2d54cf1c45\x2de232\x2da162\x2d72c6\x2d323a31161bea.mount: Deactivated successfully. Jul 2 00:49:34.196437 kubelet[2384]: I0702 00:49:34.196398 2384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-f49f44d4b-xn8pc" podStartSLOduration=20.884023601 podCreationTimestamp="2024-07-02 00:49:12 +0000 UTC" firstStartedPulling="2024-07-02 00:49:32.492773911 +0000 UTC m=+40.637774081" lastFinishedPulling="2024-07-02 00:49:33.805110474 +0000 UTC m=+41.950110644" observedRunningTime="2024-07-02 00:49:34.195922376 +0000 UTC m=+42.340922546" watchObservedRunningTime="2024-07-02 00:49:34.196360164 +0000 UTC m=+42.341360294" Jul 2 00:49:34.200000 audit[4415]: NETFILTER_CFG table=filter:105 family=2 entries=11 op=nft_register_rule pid=4415 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 00:49:34.200000 audit[4415]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffc4978720 a2=0 a3=1 items=0 ppid=2587 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:34.200000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 00:49:34.201000 audit[4415]: NETFILTER_CFG table=nat:106 family=2 entries=35 op=nft_register_chain pid=4415 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 00:49:34.201000 audit[4415]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffc4978720 a2=0 a3=1 items=0 ppid=2587 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:34.201000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 00:49:34.286442 systemd-networkd[1144]: cali718d2666cf6: Link UP Jul 2 00:49:34.287514 systemd-networkd[1144]: 
cali718d2666cf6: Gained carrier Jul 2 00:49:34.288710 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali718d2666cf6: link becomes ready Jul 2 00:49:34.303861 containerd[1351]: 2024-07-02 00:49:34.168 [INFO][4396] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--b89tp-eth0 coredns-5dd5756b68- kube-system 4e4dbead-dbec-4913-9f82-ae7fdc8be31c 803 0 2024-07-02 00:49:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-b89tp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali718d2666cf6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1857463cb4e4f1fdd353d03dc57fcc263ed1c5e2cc03446ddfafb6bb3e460c2a" Namespace="kube-system" Pod="coredns-5dd5756b68-b89tp" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--b89tp-" Jul 2 00:49:34.303861 containerd[1351]: 2024-07-02 00:49:34.168 [INFO][4396] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1857463cb4e4f1fdd353d03dc57fcc263ed1c5e2cc03446ddfafb6bb3e460c2a" Namespace="kube-system" Pod="coredns-5dd5756b68-b89tp" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--b89tp-eth0" Jul 2 00:49:34.303861 containerd[1351]: 2024-07-02 00:49:34.237 [INFO][4410] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1857463cb4e4f1fdd353d03dc57fcc263ed1c5e2cc03446ddfafb6bb3e460c2a" HandleID="k8s-pod-network.1857463cb4e4f1fdd353d03dc57fcc263ed1c5e2cc03446ddfafb6bb3e460c2a" Workload="localhost-k8s-coredns--5dd5756b68--b89tp-eth0" Jul 2 00:49:34.303861 containerd[1351]: 2024-07-02 00:49:34.250 [INFO][4410] ipam_plugin.go 264: Auto assigning IP ContainerID="1857463cb4e4f1fdd353d03dc57fcc263ed1c5e2cc03446ddfafb6bb3e460c2a" HandleID="k8s-pod-network.1857463cb4e4f1fdd353d03dc57fcc263ed1c5e2cc03446ddfafb6bb3e460c2a" Workload="localhost-k8s-coredns--5dd5756b68--b89tp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cc7b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-b89tp", "timestamp":"2024-07-02 00:49:34.236996831 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:49:34.303861 containerd[1351]: 2024-07-02 00:49:34.250 [INFO][4410] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:49:34.303861 containerd[1351]: 2024-07-02 00:49:34.250 [INFO][4410] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:49:34.303861 containerd[1351]: 2024-07-02 00:49:34.250 [INFO][4410] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 00:49:34.303861 containerd[1351]: 2024-07-02 00:49:34.253 [INFO][4410] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1857463cb4e4f1fdd353d03dc57fcc263ed1c5e2cc03446ddfafb6bb3e460c2a" host="localhost" Jul 2 00:49:34.303861 containerd[1351]: 2024-07-02 00:49:34.257 [INFO][4410] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 00:49:34.303861 containerd[1351]: 2024-07-02 00:49:34.262 [INFO][4410] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 00:49:34.303861 containerd[1351]: 2024-07-02 00:49:34.263 [INFO][4410] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 00:49:34.303861 containerd[1351]: 2024-07-02 00:49:34.265 [INFO][4410] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 00:49:34.303861 containerd[1351]: 2024-07-02 00:49:34.265 [INFO][4410] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1857463cb4e4f1fdd353d03dc57fcc263ed1c5e2cc03446ddfafb6bb3e460c2a" host="localhost" Jul 2 00:49:34.303861 containerd[1351]: 2024-07-02 00:49:34.267 [INFO][4410] ipam.go 1685: Creating new handle: k8s-pod-network.1857463cb4e4f1fdd353d03dc57fcc263ed1c5e2cc03446ddfafb6bb3e460c2a Jul 2 00:49:34.303861 containerd[1351]: 2024-07-02 00:49:34.270 [INFO][4410] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1857463cb4e4f1fdd353d03dc57fcc263ed1c5e2cc03446ddfafb6bb3e460c2a" host="localhost" Jul 2 00:49:34.303861 containerd[1351]: 2024-07-02 00:49:34.279 [INFO][4410] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.1857463cb4e4f1fdd353d03dc57fcc263ed1c5e2cc03446ddfafb6bb3e460c2a" host="localhost" Jul 2 00:49:34.303861 containerd[1351]: 2024-07-02 00:49:34.279 [INFO][4410] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.1857463cb4e4f1fdd353d03dc57fcc263ed1c5e2cc03446ddfafb6bb3e460c2a" host="localhost" Jul 2 00:49:34.303861 containerd[1351]: 2024-07-02 00:49:34.279 [INFO][4410] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 00:49:34.303861 containerd[1351]: 2024-07-02 00:49:34.279 [INFO][4410] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="1857463cb4e4f1fdd353d03dc57fcc263ed1c5e2cc03446ddfafb6bb3e460c2a" HandleID="k8s-pod-network.1857463cb4e4f1fdd353d03dc57fcc263ed1c5e2cc03446ddfafb6bb3e460c2a" Workload="localhost-k8s-coredns--5dd5756b68--b89tp-eth0" Jul 2 00:49:34.304467 containerd[1351]: 2024-07-02 00:49:34.283 [INFO][4396] k8s.go 386: Populated endpoint ContainerID="1857463cb4e4f1fdd353d03dc57fcc263ed1c5e2cc03446ddfafb6bb3e460c2a" Namespace="kube-system" Pod="coredns-5dd5756b68-b89tp" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--b89tp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--b89tp-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"4e4dbead-dbec-4913-9f82-ae7fdc8be31c", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 49, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-b89tp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali718d2666cf6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:49:34.304467 containerd[1351]: 2024-07-02 00:49:34.284 [INFO][4396] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="1857463cb4e4f1fdd353d03dc57fcc263ed1c5e2cc03446ddfafb6bb3e460c2a" Namespace="kube-system" Pod="coredns-5dd5756b68-b89tp" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--b89tp-eth0" Jul 2 00:49:34.304467 containerd[1351]: 2024-07-02 00:49:34.284 [INFO][4396] dataplane_linux.go 68: Setting the host side veth name to cali718d2666cf6 ContainerID="1857463cb4e4f1fdd353d03dc57fcc263ed1c5e2cc03446ddfafb6bb3e460c2a" Namespace="kube-system" Pod="coredns-5dd5756b68-b89tp" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--b89tp-eth0" Jul 2 00:49:34.304467 containerd[1351]: 2024-07-02 00:49:34.287 [INFO][4396] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1857463cb4e4f1fdd353d03dc57fcc263ed1c5e2cc03446ddfafb6bb3e460c2a" Namespace="kube-system" Pod="coredns-5dd5756b68-b89tp" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--b89tp-eth0" Jul 2 00:49:34.304467 containerd[1351]: 2024-07-02 00:49:34.288 [INFO][4396] k8s.go 414: Added Mac, interface name, and 
active container ID to endpoint ContainerID="1857463cb4e4f1fdd353d03dc57fcc263ed1c5e2cc03446ddfafb6bb3e460c2a" Namespace="kube-system" Pod="coredns-5dd5756b68-b89tp" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--b89tp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--b89tp-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"4e4dbead-dbec-4913-9f82-ae7fdc8be31c", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 49, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1857463cb4e4f1fdd353d03dc57fcc263ed1c5e2cc03446ddfafb6bb3e460c2a", Pod:"coredns-5dd5756b68-b89tp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali718d2666cf6", MAC:"ae:c9:86:62:9f:6f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:49:34.304467 containerd[1351]: 2024-07-02 00:49:34.300 [INFO][4396] k8s.go 500: Wrote updated endpoint to datastore ContainerID="1857463cb4e4f1fdd353d03dc57fcc263ed1c5e2cc03446ddfafb6bb3e460c2a" Namespace="kube-system" Pod="coredns-5dd5756b68-b89tp" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--b89tp-eth0" Jul 2 00:49:34.311000 audit[4431]: NETFILTER_CFG table=filter:107 family=2 entries=34 op=nft_register_chain pid=4431 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 00:49:34.311000 audit[4431]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18204 a0=3 a1=ffffe6d30870 a2=0 a3=ffff83823fa8 items=0 ppid=4134 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:34.311000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 00:49:34.383982 containerd[1351]: time="2024-07-02T00:49:34.383903102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:49:34.384160 containerd[1351]: time="2024-07-02T00:49:34.383970226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:49:34.384160 containerd[1351]: time="2024-07-02T00:49:34.383988588Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:49:34.384160 containerd[1351]: time="2024-07-02T00:49:34.384003989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:49:34.401868 systemd[1]: run-containerd-runc-k8s.io-1857463cb4e4f1fdd353d03dc57fcc263ed1c5e2cc03446ddfafb6bb3e460c2a-runc.ctNg3s.mount: Deactivated successfully. Jul 2 00:49:34.409691 systemd-resolved[1294]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:49:34.431797 containerd[1351]: time="2024-07-02T00:49:34.431745828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-b89tp,Uid:4e4dbead-dbec-4913-9f82-ae7fdc8be31c,Namespace:kube-system,Attempt:1,} returns sandbox id \"1857463cb4e4f1fdd353d03dc57fcc263ed1c5e2cc03446ddfafb6bb3e460c2a\"" Jul 2 00:49:34.432503 kubelet[2384]: E0702 00:49:34.432484 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:49:34.445402 containerd[1351]: time="2024-07-02T00:49:34.445353774Z" level=info msg="CreateContainer within sandbox \"1857463cb4e4f1fdd353d03dc57fcc263ed1c5e2cc03446ddfafb6bb3e460c2a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:49:34.456974 containerd[1351]: time="2024-07-02T00:49:34.456924310Z" level=info msg="CreateContainer within sandbox \"1857463cb4e4f1fdd353d03dc57fcc263ed1c5e2cc03446ddfafb6bb3e460c2a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4a9de45f6fad01594344ddea75d7d04fc6ca6c17d858cd411a9f9a08f213d78f\"" Jul 2 00:49:34.457348 containerd[1351]: time="2024-07-02T00:49:34.457322496Z" level=info msg="StartContainer for \"4a9de45f6fad01594344ddea75d7d04fc6ca6c17d858cd411a9f9a08f213d78f\"" Jul 2 00:49:34.496511 containerd[1351]: time="2024-07-02T00:49:34.496466907Z" level=info msg="StartContainer for \"4a9de45f6fad01594344ddea75d7d04fc6ca6c17d858cd411a9f9a08f213d78f\" returns successfully" Jul 2 00:49:34.610404 systemd-networkd[1144]: cali5977f9ce63e: Gained IPv6LL Jul 2 00:49:34.704969 containerd[1351]: time="2024-07-02T00:49:34.704520191Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:49:34.705712 containerd[1351]: time="2024-07-02T00:49:34.705680065Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Jul 2 00:49:34.706505 containerd[1351]: time="2024-07-02T00:49:34.706474715Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:49:34.708390 containerd[1351]: time="2024-07-02T00:49:34.708359435Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:49:34.709680 containerd[1351]: time="2024-07-02T00:49:34.709643317Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 
00:49:34.711070 containerd[1351]: time="2024-07-02T00:49:34.711023165Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 904.978029ms" Jul 2 00:49:34.711116 containerd[1351]: time="2024-07-02T00:49:34.711066487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Jul 2 00:49:34.714653 containerd[1351]: time="2024-07-02T00:49:34.714601833Z" level=info msg="CreateContainer within sandbox \"df84471630a341de13b454ddb9b009e77d7ec326b9b967af793a816f3cfa8694\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 00:49:34.726371 containerd[1351]: time="2024-07-02T00:49:34.726303657Z" level=info msg="CreateContainer within sandbox \"df84471630a341de13b454ddb9b009e77d7ec326b9b967af793a816f3cfa8694\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a679a7f9a5eb5e039e211e1bdde8842f6b02a95c205ee59873ccf6c3e8e32de7\"" Jul 2 00:49:34.726886 containerd[1351]: time="2024-07-02T00:49:34.726849052Z" level=info msg="StartContainer for \"a679a7f9a5eb5e039e211e1bdde8842f6b02a95c205ee59873ccf6c3e8e32de7\"" Jul 2 00:49:34.784565 containerd[1351]: time="2024-07-02T00:49:34.784511123Z" level=info msg="StartContainer for \"a679a7f9a5eb5e039e211e1bdde8842f6b02a95c205ee59873ccf6c3e8e32de7\" returns successfully" Jul 2 00:49:34.786848 containerd[1351]: time="2024-07-02T00:49:34.786048660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 00:49:34.929436 systemd-networkd[1144]: vxlan.calico: Gained IPv6LL Jul 2 00:49:35.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.149:22-10.0.0.1:58094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:49:35.070618 systemd[1]: Started sshd@9-10.0.0.149:22-10.0.0.1:58094.service - OpenSSH per-connection server daemon (10.0.0.1:58094). Jul 2 00:49:35.071482 kernel: kauditd_printk_skb: 50 callbacks suppressed Jul 2 00:49:35.071516 kernel: audit: type=1130 audit(1719881375.069:327): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.149:22-10.0.0.1:58094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:49:35.106000 audit[4555]: USER_ACCT pid=4555 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:35.108161 sshd[4555]: Accepted publickey for core from 10.0.0.1 port 58094 ssh2: RSA SHA256:Kp5j1k40JRjOvdBxgA97BAtl42hDR/cCrPEvDSx0HPk Jul 2 00:49:35.107000 audit[4555]: CRED_ACQ pid=4555 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:35.111217 sshd[4555]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:49:35.113198 kernel: audit: type=1101 audit(1719881375.106:328): pid=4555 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:35.113279 kernel: audit: type=1103 audit(1719881375.107:329): pid=4555 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:35.115368 kernel: audit: type=1006 audit(1719881375.108:330): pid=4555 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jul 2 00:49:35.115430 kernel: audit: type=1300 audit(1719881375.108:330): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffff4dbd10 a2=3 a3=1 items=0 ppid=1 pid=4555 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:35.108000 audit[4555]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffff4dbd10 a2=3 a3=1 items=0 ppid=1 pid=4555 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:35.108000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 00:49:35.118901 kernel: audit: type=1327 audit(1719881375.108:330): proctitle=737368643A20636F7265205B707269765D Jul 2 00:49:35.121228 systemd-logind[1334]: New session 10 of user core. Jul 2 00:49:35.131507 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 2 00:49:35.133000 audit[4555]: USER_START pid=4555 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:35.138265 kernel: audit: type=1105 audit(1719881375.133:331): pid=4555 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:35.138356 kernel: audit: type=1103 audit(1719881375.136:332): pid=4558 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:35.136000 audit[4558]: CRED_ACQ pid=4558 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:35.178283 kubelet[2384]: E0702 00:49:35.177025 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:49:35.191198 kubelet[2384]: I0702 00:49:35.190599 2384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-b89tp" podStartSLOduration=28.190559049 podCreationTimestamp="2024-07-02 00:49:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:49:35.185365925 +0000 UTC m=+43.330366095" watchObservedRunningTime="2024-07-02 00:49:35.190559049 +0000 UTC m=+43.335559219" Jul 2 00:49:35.207192 kubelet[2384]: I0702 00:49:35.198610 2384 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:49:35.207192 kubelet[2384]: E0702 00:49:35.199292 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:49:35.208000 audit[4568]: NETFILTER_CFG table=filter:108 family=2 entries=8 op=nft_register_rule pid=4568 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 00:49:35.208000 audit[4568]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffd651f0b0 a2=0 a3=1 items=0 ppid=2587 pid=4568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:35.214554 kernel: audit: type=1325 audit(1719881375.208:333): table=filter:108 family=2 entries=8 op=nft_register_rule pid=4568 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 00:49:35.214608 kernel: audit: type=1300 audit(1719881375.208:333): arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffd651f0b0 a2=0 a3=1 items=0 ppid=2587 pid=4568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:35.208000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 00:49:35.209000 audit[4568]: NETFILTER_CFG table=nat:109 family=2 entries=44 op=nft_register_rule pid=4568 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 00:49:35.209000 audit[4568]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffd651f0b0 a2=0 a3=1 items=0 ppid=2587 pid=4568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:35.209000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 00:49:35.344711 sshd[4555]: pam_unix(sshd:session): session closed for user core Jul 2 00:49:35.345000 audit[4555]: USER_END pid=4555 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:35.346000 audit[4555]: CRED_DISP pid=4555 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:35.352634 systemd[1]: Started sshd@10-10.0.0.149:22-10.0.0.1:58110.service - OpenSSH per-connection server daemon (10.0.0.1:58110). Jul 2 00:49:35.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.149:22-10.0.0.1:58110 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:49:35.353381 systemd[1]: sshd@9-10.0.0.149:22-10.0.0.1:58094.service: Deactivated successfully. Jul 2 00:49:35.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.149:22-10.0.0.1:58094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:49:35.354420 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 00:49:35.355006 systemd-logind[1334]: Session 10 logged out. Waiting for processes to exit. Jul 2 00:49:35.355944 systemd-logind[1334]: Removed session 10. 
Jul 2 00:49:35.382000 audit[4571]: USER_ACCT pid=4571 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:35.383023 sshd[4571]: Accepted publickey for core from 10.0.0.1 port 58110 ssh2: RSA SHA256:Kp5j1k40JRjOvdBxgA97BAtl42hDR/cCrPEvDSx0HPk Jul 2 00:49:35.383000 audit[4571]: CRED_ACQ pid=4571 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:35.383000 audit[4571]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd946cf20 a2=3 a3=1 items=0 ppid=1 pid=4571 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:35.383000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 00:49:35.384362 sshd[4571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:49:35.388106 systemd-logind[1334]: New session 11 of user core. Jul 2 00:49:35.401532 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 2 00:49:35.405000 audit[4571]: USER_START pid=4571 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:35.406000 audit[4575]: CRED_ACQ pid=4575 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:35.761269 sshd[4571]: pam_unix(sshd:session): session closed for user core Jul 2 00:49:35.768896 systemd[1]: Started sshd@11-10.0.0.149:22-10.0.0.1:58124.service - OpenSSH per-connection server daemon (10.0.0.1:58124). Jul 2 00:49:35.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.149:22-10.0.0.1:58124 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:49:35.777000 audit[4571]: USER_END pid=4571 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:35.777000 audit[4571]: CRED_DISP pid=4571 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:35.779561 systemd[1]: sshd@10-10.0.0.149:22-10.0.0.1:58110.service: Deactivated successfully. Jul 2 00:49:35.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.149:22-10.0.0.1:58110 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:49:35.780835 systemd-logind[1334]: Session 11 logged out. Waiting for processes to exit. 
Jul 2 00:49:35.780913 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 00:49:35.782583 systemd-logind[1334]: Removed session 11. Jul 2 00:49:35.815073 sshd[4587]: Accepted publickey for core from 10.0.0.1 port 58124 ssh2: RSA SHA256:Kp5j1k40JRjOvdBxgA97BAtl42hDR/cCrPEvDSx0HPk Jul 2 00:49:35.814000 audit[4587]: USER_ACCT pid=4587 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:35.815000 audit[4587]: CRED_ACQ pid=4587 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:35.816000 audit[4587]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff00d6fd0 a2=3 a3=1 items=0 ppid=1 pid=4587 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:35.816000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 00:49:35.816619 sshd[4587]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:49:35.834193 systemd-logind[1334]: New session 12 of user core. Jul 2 00:49:35.841506 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 00:49:35.846000 audit[4587]: USER_START pid=4587 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:35.847000 audit[4595]: CRED_ACQ pid=4595 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:35.907347 containerd[1351]: time="2024-07-02T00:49:35.907302045Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:49:35.908237 containerd[1351]: time="2024-07-02T00:49:35.908194420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Jul 2 00:49:35.909218 containerd[1351]: time="2024-07-02T00:49:35.909170281Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:49:35.912982 containerd[1351]: time="2024-07-02T00:49:35.912759585Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:49:35.913946 containerd[1351]: time="2024-07-02T00:49:35.913915777Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:49:35.915182 containerd[1351]: time="2024-07-02T00:49:35.915118892Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id 
\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 1.129025469s" Jul 2 00:49:35.915182 containerd[1351]: time="2024-07-02T00:49:35.915176416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Jul 2 00:49:35.917718 containerd[1351]: time="2024-07-02T00:49:35.917680692Z" level=info msg="CreateContainer within sandbox \"df84471630a341de13b454ddb9b009e77d7ec326b9b967af793a816f3cfa8694\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 00:49:35.932627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2770809162.mount: Deactivated successfully. Jul 2 00:49:35.937973 containerd[1351]: time="2024-07-02T00:49:35.937924155Z" level=info msg="CreateContainer within sandbox \"df84471630a341de13b454ddb9b009e77d7ec326b9b967af793a816f3cfa8694\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"21a42929bc947e0ad692ca070fccf868a16efc54768f8bed889ec553d818ac37\"" Jul 2 00:49:35.939172 containerd[1351]: time="2024-07-02T00:49:35.939088268Z" level=info msg="StartContainer for \"21a42929bc947e0ad692ca070fccf868a16efc54768f8bed889ec553d818ac37\"" Jul 2 00:49:35.953413 systemd-networkd[1144]: cali718d2666cf6: Gained IPv6LL Jul 2 00:49:36.009649 containerd[1351]: time="2024-07-02T00:49:36.009592815Z" level=info msg="StartContainer for \"21a42929bc947e0ad692ca070fccf868a16efc54768f8bed889ec553d818ac37\" returns successfully" Jul 2 00:49:36.045408 sshd[4587]: pam_unix(sshd:session): session closed for user core Jul 2 00:49:36.046000 audit[4587]: USER_END pid=4587 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:36.046000 audit[4587]: CRED_DISP pid=4587 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:36.049498 systemd[1]: sshd@11-10.0.0.149:22-10.0.0.1:58124.service: Deactivated successfully. Jul 2 00:49:36.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.149:22-10.0.0.1:58124 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:49:36.050792 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 00:49:36.051294 systemd-logind[1334]: Session 12 logged out. Waiting for processes to exit. Jul 2 00:49:36.052329 systemd-logind[1334]: Removed session 12. Jul 2 00:49:36.184330 systemd[1]: run-containerd-runc-k8s.io-21a42929bc947e0ad692ca070fccf868a16efc54768f8bed889ec553d818ac37-runc.XZU4L6.mount: Deactivated successfully. 
Jul 2 00:49:36.204084 kubelet[2384]: E0702 00:49:36.203625 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:49:36.223000 audit[4645]: NETFILTER_CFG table=filter:110 family=2 entries=8 op=nft_register_rule pid=4645 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 00:49:36.223000 audit[4645]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=fffffc2f72d0 a2=0 a3=1 items=0 ppid=2587 pid=4645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:36.223000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 00:49:36.230000 audit[4645]: NETFILTER_CFG table=nat:111 family=2 entries=56 op=nft_register_chain pid=4645 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 00:49:36.230000 audit[4645]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=fffffc2f72d0 a2=0 a3=1 items=0 ppid=2587 pid=4645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:36.230000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 00:49:37.064902 kubelet[2384]: I0702 00:49:37.064856 2384 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 2 00:49:37.068392 kubelet[2384]: I0702 00:49:37.068367 2384 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 2 00:49:37.208673 kubelet[2384]: E0702 00:49:37.208645 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:49:41.056623 systemd[1]: Started sshd@12-10.0.0.149:22-10.0.0.1:40258.service - OpenSSH per-connection server daemon (10.0.0.1:40258). Jul 2 00:49:41.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.149:22-10.0.0.1:40258 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:49:41.057611 kernel: kauditd_printk_skb: 35 callbacks suppressed Jul 2 00:49:41.057687 kernel: audit: type=1130 audit(1719881381.055:358): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.149:22-10.0.0.1:40258 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:49:41.086000 audit[4657]: USER_ACCT pid=4657 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:41.087821 sshd[4657]: Accepted publickey for core from 10.0.0.1 port 40258 ssh2: RSA SHA256:Kp5j1k40JRjOvdBxgA97BAtl42hDR/cCrPEvDSx0HPk Jul 2 00:49:41.089391 sshd[4657]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:49:41.087000 audit[4657]: CRED_ACQ pid=4657 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:41.092090 kernel: audit: type=1101 audit(1719881381.086:359): pid=4657 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:41.092132 kernel: audit: type=1103 audit(1719881381.087:360): pid=4657 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:41.093623 kernel: audit: type=1006 audit(1719881381.087:361): pid=4657 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jul 2 00:49:41.087000 audit[4657]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe27bfc50 a2=3 a3=1 items=0 ppid=1 pid=4657 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:41.096169 kernel: audit: type=1300 audit(1719881381.087:361): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe27bfc50 a2=3 a3=1 items=0 ppid=1 pid=4657 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:41.096449 kernel: audit: type=1327 audit(1719881381.087:361): proctitle=737368643A20636F7265205B707269765D Jul 2 00:49:41.087000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 00:49:41.098337 systemd-logind[1334]: New session 13 of user core. Jul 2 00:49:41.106540 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jul 2 00:49:41.108000 audit[4657]: USER_START pid=4657 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:41.111000 audit[4660]: CRED_ACQ pid=4660 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:41.113337 kernel: audit: type=1105 audit(1719881381.108:362): pid=4657 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:41.116265 kernel: audit: type=1103 audit(1719881381.111:363): pid=4660 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:41.275542 sshd[4657]: pam_unix(sshd:session): session closed for user core Jul 2 00:49:41.275000 audit[4657]: USER_END pid=4657 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:41.275000 audit[4657]: CRED_DISP pid=4657 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:41.283573 kernel: audit: type=1106 audit(1719881381.275:364): pid=4657 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:41.283647 kernel: audit: type=1104 audit(1719881381.275:365): pid=4657 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:41.294672 systemd[1]: Started sshd@13-10.0.0.149:22-10.0.0.1:40260.service - OpenSSH per-connection server daemon (10.0.0.1:40260). Jul 2 00:49:41.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.149:22-10.0.0.1:40260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:49:41.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.149:22-10.0.0.1:40258 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:49:41.295259 systemd[1]: sshd@12-10.0.0.149:22-10.0.0.1:40258.service: Deactivated successfully. Jul 2 00:49:41.296342 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 00:49:41.297349 systemd-logind[1334]: Session 13 logged out. Waiting for processes to exit. 
Jul 2 00:49:41.299724 systemd-logind[1334]: Removed session 13. Jul 2 00:49:41.327000 audit[4669]: USER_ACCT pid=4669 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:41.328901 sshd[4669]: Accepted publickey for core from 10.0.0.1 port 40260 ssh2: RSA SHA256:Kp5j1k40JRjOvdBxgA97BAtl42hDR/cCrPEvDSx0HPk Jul 2 00:49:41.329000 audit[4669]: CRED_ACQ pid=4669 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:41.329000 audit[4669]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd1f3c9b0 a2=3 a3=1 items=0 ppid=1 pid=4669 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:41.329000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 00:49:41.330684 sshd[4669]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:49:41.340571 systemd-logind[1334]: New session 14 of user core. Jul 2 00:49:41.350510 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 00:49:41.354000 audit[4669]: USER_START pid=4669 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:41.355000 audit[4674]: CRED_ACQ pid=4674 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:41.583654 sshd[4669]: pam_unix(sshd:session): session closed for user core Jul 2 00:49:41.583000 audit[4669]: USER_END pid=4669 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:41.584000 audit[4669]: CRED_DISP pid=4669 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:41.589822 systemd[1]: Started sshd@14-10.0.0.149:22-10.0.0.1:40266.service - OpenSSH per-connection server daemon (10.0.0.1:40266). Jul 2 00:49:41.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.149:22-10.0.0.1:40266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:49:41.590556 systemd[1]: sshd@13-10.0.0.149:22-10.0.0.1:40260.service: Deactivated successfully. Jul 2 00:49:41.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.149:22-10.0.0.1:40260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:49:41.591775 systemd-logind[1334]: Session 14 logged out. 
Waiting for processes to exit. Jul 2 00:49:41.591818 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 00:49:41.593915 systemd-logind[1334]: Removed session 14. Jul 2 00:49:41.625000 audit[4682]: USER_ACCT pid=4682 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:41.626952 sshd[4682]: Accepted publickey for core from 10.0.0.1 port 40266 ssh2: RSA SHA256:Kp5j1k40JRjOvdBxgA97BAtl42hDR/cCrPEvDSx0HPk Jul 2 00:49:41.627000 audit[4682]: CRED_ACQ pid=4682 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:41.627000 audit[4682]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdfb46eb0 a2=3 a3=1 items=0 ppid=1 pid=4682 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:41.627000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 00:49:41.628649 sshd[4682]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:49:41.632628 systemd-logind[1334]: New session 15 of user core. Jul 2 00:49:41.641524 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 2 00:49:41.643000 audit[4682]: USER_START pid=4682 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:41.645000 audit[4687]: CRED_ACQ pid=4687 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:42.348000 audit[4698]: NETFILTER_CFG table=filter:112 family=2 entries=20 op=nft_register_rule pid=4698 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 00:49:42.348000 audit[4698]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=ffffd22d8910 a2=0 a3=1 items=0 ppid=2587 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:42.348000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 00:49:42.351360 sshd[4682]: pam_unix(sshd:session): session closed for user core Jul 2 00:49:42.351000 audit[4682]: USER_END pid=4682 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:42.351000 audit[4682]: CRED_DISP pid=4682 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:42.349000 audit[4698]: NETFILTER_CFG table=nat:113 
family=2 entries=20 op=nft_register_rule pid=4698 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 00:49:42.349000 audit[4698]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd22d8910 a2=0 a3=1 items=0 ppid=2587 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:42.349000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 00:49:42.356660 systemd[1]: Started sshd@15-10.0.0.149:22-10.0.0.1:40276.service - OpenSSH per-connection server daemon (10.0.0.1:40276). Jul 2 00:49:42.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.149:22-10.0.0.1:40276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:49:42.357231 systemd[1]: sshd@14-10.0.0.149:22-10.0.0.1:40266.service: Deactivated successfully. Jul 2 00:49:42.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.149:22-10.0.0.1:40266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:49:42.358592 systemd-logind[1334]: Session 15 logged out. Waiting for processes to exit. Jul 2 00:49:42.358972 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 00:49:42.360428 systemd-logind[1334]: Removed session 15. Jul 2 00:49:42.366000 audit[4703]: NETFILTER_CFG table=filter:114 family=2 entries=32 op=nft_register_rule pid=4703 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 00:49:42.366000 audit[4703]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=ffffde96c890 a2=0 a3=1 items=0 ppid=2587 pid=4703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:42.366000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 00:49:42.368000 audit[4703]: NETFILTER_CFG table=nat:115 family=2 entries=20 op=nft_register_rule pid=4703 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 00:49:42.368000 audit[4703]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffde96c890 a2=0 a3=1 items=0 ppid=2587 pid=4703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:42.368000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 00:49:42.392000 audit[4699]: USER_ACCT pid=4699 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:42.393977 sshd[4699]: Accepted publickey for core from 10.0.0.1 port 40276 ssh2: RSA SHA256:Kp5j1k40JRjOvdBxgA97BAtl42hDR/cCrPEvDSx0HPk Jul 2 00:49:42.393000 audit[4699]: CRED_ACQ pid=4699 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:42.393000 audit[4699]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc90d2ee0 a2=3 a3=1 items=0 ppid=1 pid=4699 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:42.393000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 00:49:42.395039 sshd[4699]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:49:42.398576 systemd-logind[1334]: New session 16 of user core. Jul 2 00:49:42.407511 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 00:49:42.409000 audit[4699]: USER_START pid=4699 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:42.411000 audit[4706]: CRED_ACQ pid=4706 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:42.879951 sshd[4699]: pam_unix(sshd:session): session closed for user core Jul 2 00:49:42.879000 audit[4699]: USER_END pid=4699 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:42.879000 audit[4699]: CRED_DISP pid=4699 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:42.885690 systemd[1]: Started sshd@16-10.0.0.149:22-10.0.0.1:40286.service - OpenSSH per-connection server daemon (10.0.0.1:40286). Jul 2 00:49:42.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.149:22-10.0.0.1:40286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:49:42.886309 systemd[1]: sshd@15-10.0.0.149:22-10.0.0.1:40276.service: Deactivated successfully. Jul 2 00:49:42.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.149:22-10.0.0.1:40276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:49:42.887509 systemd-logind[1334]: Session 16 logged out. Waiting for processes to exit. Jul 2 00:49:42.887559 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 00:49:42.888927 systemd-logind[1334]: Removed session 16. 
Jul 2 00:49:42.921000 audit[4714]: USER_ACCT pid=4714 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:42.922479 sshd[4714]: Accepted publickey for core from 10.0.0.1 port 40286 ssh2: RSA SHA256:Kp5j1k40JRjOvdBxgA97BAtl42hDR/cCrPEvDSx0HPk Jul 2 00:49:42.922000 audit[4714]: CRED_ACQ pid=4714 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:42.922000 audit[4714]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcb7fb330 a2=3 a3=1 items=0 ppid=1 pid=4714 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:42.922000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 00:49:42.923808 sshd[4714]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:49:42.927433 systemd-logind[1334]: New session 17 of user core. Jul 2 00:49:42.932484 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 2 00:49:42.934000 audit[4714]: USER_START pid=4714 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:42.936000 audit[4719]: CRED_ACQ pid=4719 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:43.075068 sshd[4714]: pam_unix(sshd:session): session closed for user core Jul 2 00:49:43.074000 audit[4714]: USER_END pid=4714 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:43.074000 audit[4714]: CRED_DISP pid=4714 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:43.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.149:22-10.0.0.1:40286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:49:43.078518 systemd[1]: sshd@16-10.0.0.149:22-10.0.0.1:40286.service: Deactivated successfully. Jul 2 00:49:43.079756 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 00:49:43.080182 systemd-logind[1334]: Session 17 logged out. Waiting for processes to exit. Jul 2 00:49:43.081000 systemd-logind[1334]: Removed session 17. Jul 2 00:49:48.087726 systemd[1]: Started sshd@17-10.0.0.149:22-10.0.0.1:40292.service - OpenSSH per-connection server daemon (10.0.0.1:40292). 
Jul 2 00:49:48.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.149:22-10.0.0.1:40292 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:49:48.090900 kernel: kauditd_printk_skb: 57 callbacks suppressed Jul 2 00:49:48.090990 kernel: audit: type=1130 audit(1719881388.086:407): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.149:22-10.0.0.1:40292 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:49:48.117000 audit[4743]: USER_ACCT pid=4743 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:48.119128 sshd[4743]: Accepted publickey for core from 10.0.0.1 port 40292 ssh2: RSA SHA256:Kp5j1k40JRjOvdBxgA97BAtl42hDR/cCrPEvDSx0HPk Jul 2 00:49:48.120447 sshd[4743]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:49:48.118000 audit[4743]: CRED_ACQ pid=4743 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:48.123369 kernel: audit: type=1101 audit(1719881388.117:408): pid=4743 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:48.123415 kernel: audit: type=1103 audit(1719881388.118:409): pid=4743 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:48.125019 kernel: audit: type=1006 audit(1719881388.118:410): pid=4743 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jul 2 00:49:48.118000 audit[4743]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe2da5c30 a2=3 a3=1 items=0 ppid=1 pid=4743 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:48.127705 kernel: audit: type=1300 audit(1719881388.118:410): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe2da5c30 a2=3 a3=1 items=0 ppid=1 pid=4743 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:48.118000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 00:49:48.128888 kernel: audit: type=1327 audit(1719881388.118:410): proctitle=737368643A20636F7265205B707269765D Jul 2 00:49:48.129460 systemd-logind[1334]: New session 18 of user core. Jul 2 00:49:48.135491 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jul 2 00:49:48.137000 audit[4743]: USER_START pid=4743 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:48.138000 audit[4746]: CRED_ACQ pid=4746 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:48.144165 kernel: audit: type=1105 audit(1719881388.137:411): pid=4743 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:48.144228 kernel: audit: type=1103 audit(1719881388.138:412): pid=4746 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:48.260470 sshd[4743]: pam_unix(sshd:session): session closed for user core Jul 2 00:49:48.263000 audit[4743]: USER_END pid=4743 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:48.263000 audit[4743]: CRED_DISP pid=4743 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:48.266265 systemd[1]: sshd@17-10.0.0.149:22-10.0.0.1:40292.service: Deactivated successfully. Jul 2 00:49:48.267422 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 00:49:48.267863 systemd-logind[1334]: Session 18 logged out. Waiting for processes to exit. Jul 2 00:49:48.268665 systemd-logind[1334]: Removed session 18. Jul 2 00:49:48.269641 kernel: audit: type=1106 audit(1719881388.263:413): pid=4743 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:48.269843 kernel: audit: type=1104 audit(1719881388.263:414): pid=4743 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:48.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.149:22-10.0.0.1:40292 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:49:48.749000 audit[4759]: NETFILTER_CFG table=filter:116 family=2 entries=20 op=nft_register_rule pid=4759 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 00:49:48.749000 audit[4759]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffe83bca50 a2=0 a3=1 items=0 ppid=2587 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:48.749000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 00:49:48.751000 audit[4759]: NETFILTER_CFG table=nat:117 family=2 entries=104 op=nft_register_chain pid=4759 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 00:49:48.751000 audit[4759]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffe83bca50 a2=0 a3=1 items=0 ppid=2587 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:48.751000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 00:49:49.150520 kubelet[2384]: I0702 00:49:49.150467 2384 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:49:49.180905 systemd[1]: run-containerd-runc-k8s.io-959449dacf814a7782f7f53f65dfa3692100d993be4541b2f81a11a279a01abb-runc.xsjVzh.mount: Deactivated successfully. Jul 2 00:49:49.230285 kubelet[2384]: I0702 00:49:49.229832 2384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-dczfn" podStartSLOduration=34.692896145 podCreationTimestamp="2024-07-02 00:49:12 +0000 UTC" firstStartedPulling="2024-07-02 00:49:33.378509623 +0000 UTC m=+41.523509753" lastFinishedPulling="2024-07-02 00:49:35.91540467 +0000 UTC m=+44.060404840" observedRunningTime="2024-07-02 00:49:36.214374587 +0000 UTC m=+44.359374797" watchObservedRunningTime="2024-07-02 00:49:49.229791232 +0000 UTC m=+57.374791442" Jul 2 00:49:51.944650 containerd[1351]: time="2024-07-02T00:49:51.944607578Z" level=info msg="StopPodSandbox for \"ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c\"" Jul 2 00:49:52.021252 containerd[1351]: 2024-07-02 00:49:51.981 [WARNING][4814] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--b89tp-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"4e4dbead-dbec-4913-9f82-ae7fdc8be31c", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 49, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1857463cb4e4f1fdd353d03dc57fcc263ed1c5e2cc03446ddfafb6bb3e460c2a", Pod:"coredns-5dd5756b68-b89tp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali718d2666cf6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:49:52.021252 containerd[1351]: 2024-07-02 00:49:51.981 [INFO][4814] k8s.go 608: Cleaning up netns ContainerID="ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" Jul 2 00:49:52.021252 containerd[1351]: 2024-07-02 00:49:51.981 [INFO][4814] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" iface="eth0" netns="" Jul 2 00:49:52.021252 containerd[1351]: 2024-07-02 00:49:51.981 [INFO][4814] k8s.go 615: Releasing IP address(es) ContainerID="ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" Jul 2 00:49:52.021252 containerd[1351]: 2024-07-02 00:49:51.981 [INFO][4814] utils.go 188: Calico CNI releasing IP address ContainerID="ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" Jul 2 00:49:52.021252 containerd[1351]: 2024-07-02 00:49:52.004 [INFO][4821] ipam_plugin.go 411: Releasing address using handleID ContainerID="ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" HandleID="k8s-pod-network.ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" Workload="localhost-k8s-coredns--5dd5756b68--b89tp-eth0" Jul 2 00:49:52.021252 containerd[1351]: 2024-07-02 00:49:52.004 [INFO][4821] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:49:52.021252 containerd[1351]: 2024-07-02 00:49:52.004 [INFO][4821] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:49:52.021252 containerd[1351]: 2024-07-02 00:49:52.013 [WARNING][4821] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" HandleID="k8s-pod-network.ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" Workload="localhost-k8s-coredns--5dd5756b68--b89tp-eth0" Jul 2 00:49:52.021252 containerd[1351]: 2024-07-02 00:49:52.013 [INFO][4821] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" HandleID="k8s-pod-network.ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" Workload="localhost-k8s-coredns--5dd5756b68--b89tp-eth0" Jul 2 00:49:52.021252 containerd[1351]: 2024-07-02 00:49:52.015 [INFO][4821] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:49:52.021252 containerd[1351]: 2024-07-02 00:49:52.019 [INFO][4814] k8s.go 621: Teardown processing complete. ContainerID="ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" Jul 2 00:49:52.021670 containerd[1351]: time="2024-07-02T00:49:52.021294215Z" level=info msg="TearDown network for sandbox \"ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c\" successfully" Jul 2 00:49:52.021670 containerd[1351]: time="2024-07-02T00:49:52.021324816Z" level=info msg="StopPodSandbox for \"ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c\" returns successfully" Jul 2 00:49:52.022742 containerd[1351]: time="2024-07-02T00:49:52.022690724Z" level=info msg="RemovePodSandbox for \"ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c\"" Jul 2 00:49:52.029839 containerd[1351]: time="2024-07-02T00:49:52.022735606Z" level=info msg="Forcibly stopping sandbox \"ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c\"" Jul 2 00:49:52.096708 containerd[1351]: 2024-07-02 00:49:52.065 [WARNING][4846] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--b89tp-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"4e4dbead-dbec-4913-9f82-ae7fdc8be31c", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 49, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1857463cb4e4f1fdd353d03dc57fcc263ed1c5e2cc03446ddfafb6bb3e460c2a", Pod:"coredns-5dd5756b68-b89tp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali718d2666cf6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:49:52.096708 containerd[1351]: 2024-07-02 00:49:52.065 [INFO][4846] k8s.go 608: Cleaning up netns ContainerID="ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" Jul 2 00:49:52.096708 containerd[1351]: 2024-07-02 00:49:52.065 [INFO][4846] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" iface="eth0" netns="" Jul 2 00:49:52.096708 containerd[1351]: 2024-07-02 00:49:52.065 [INFO][4846] k8s.go 615: Releasing IP address(es) ContainerID="ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" Jul 2 00:49:52.096708 containerd[1351]: 2024-07-02 00:49:52.065 [INFO][4846] utils.go 188: Calico CNI releasing IP address ContainerID="ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" Jul 2 00:49:52.096708 containerd[1351]: 2024-07-02 00:49:52.082 [INFO][4854] ipam_plugin.go 411: Releasing address using handleID ContainerID="ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" HandleID="k8s-pod-network.ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" Workload="localhost-k8s-coredns--5dd5756b68--b89tp-eth0" Jul 2 00:49:52.096708 containerd[1351]: 2024-07-02 00:49:52.082 [INFO][4854] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:49:52.096708 containerd[1351]: 2024-07-02 00:49:52.082 [INFO][4854] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:49:52.096708 containerd[1351]: 2024-07-02 00:49:52.092 [WARNING][4854] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" HandleID="k8s-pod-network.ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" Workload="localhost-k8s-coredns--5dd5756b68--b89tp-eth0" Jul 2 00:49:52.096708 containerd[1351]: 2024-07-02 00:49:52.092 [INFO][4854] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" HandleID="k8s-pod-network.ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" Workload="localhost-k8s-coredns--5dd5756b68--b89tp-eth0" Jul 2 00:49:52.096708 containerd[1351]: 2024-07-02 00:49:52.093 [INFO][4854] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:49:52.096708 containerd[1351]: 2024-07-02 00:49:52.095 [INFO][4846] k8s.go 621: Teardown processing complete. ContainerID="ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c" Jul 2 00:49:52.097137 containerd[1351]: time="2024-07-02T00:49:52.096737486Z" level=info msg="TearDown network for sandbox \"ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c\" successfully" Jul 2 00:49:52.099937 containerd[1351]: time="2024-07-02T00:49:52.099897283Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 00:49:52.099984 containerd[1351]: time="2024-07-02T00:49:52.099963566Z" level=info msg="RemovePodSandbox \"ace764a8897584804c821b829918c35ea70a3df5bb2c26d4cb18288c9a86c75c\" returns successfully" Jul 2 00:49:52.100517 containerd[1351]: time="2024-07-02T00:49:52.100490432Z" level=info msg="StopPodSandbox for \"c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05\"" Jul 2 00:49:52.164271 containerd[1351]: 2024-07-02 00:49:52.132 [WARNING][4877] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dczfn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"565b958b-5d9c-4fe5-96de-2157ed8f17c7", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 49, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df84471630a341de13b454ddb9b009e77d7ec326b9b967af793a816f3cfa8694", Pod:"csi-node-driver-dczfn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali5977f9ce63e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:49:52.164271 containerd[1351]: 2024-07-02 00:49:52.133 [INFO][4877] k8s.go 608: Cleaning up netns ContainerID="c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" Jul 2 00:49:52.164271 containerd[1351]: 2024-07-02 00:49:52.133 [INFO][4877] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" iface="eth0" netns="" Jul 2 00:49:52.164271 containerd[1351]: 2024-07-02 00:49:52.133 [INFO][4877] k8s.go 615: Releasing IP address(es) ContainerID="c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" Jul 2 00:49:52.164271 containerd[1351]: 2024-07-02 00:49:52.133 [INFO][4877] utils.go 188: Calico CNI releasing IP address ContainerID="c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" Jul 2 00:49:52.164271 containerd[1351]: 2024-07-02 00:49:52.151 [INFO][4885] ipam_plugin.go 411: Releasing address using handleID ContainerID="c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" HandleID="k8s-pod-network.c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" Workload="localhost-k8s-csi--node--driver--dczfn-eth0" Jul 2 00:49:52.164271 containerd[1351]: 2024-07-02 00:49:52.151 [INFO][4885] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:49:52.164271 containerd[1351]: 2024-07-02 00:49:52.151 [INFO][4885] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:49:52.164271 containerd[1351]: 2024-07-02 00:49:52.159 [WARNING][4885] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" HandleID="k8s-pod-network.c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" Workload="localhost-k8s-csi--node--driver--dczfn-eth0" Jul 2 00:49:52.164271 containerd[1351]: 2024-07-02 00:49:52.159 [INFO][4885] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" HandleID="k8s-pod-network.c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" Workload="localhost-k8s-csi--node--driver--dczfn-eth0" Jul 2 00:49:52.164271 containerd[1351]: 2024-07-02 00:49:52.161 [INFO][4885] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:49:52.164271 containerd[1351]: 2024-07-02 00:49:52.162 [INFO][4877] k8s.go 621: Teardown processing complete. ContainerID="c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" Jul 2 00:49:52.164680 containerd[1351]: time="2024-07-02T00:49:52.164302445Z" level=info msg="TearDown network for sandbox \"c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05\" successfully" Jul 2 00:49:52.164680 containerd[1351]: time="2024-07-02T00:49:52.164333247Z" level=info msg="StopPodSandbox for \"c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05\" returns successfully" Jul 2 00:49:52.164838 containerd[1351]: time="2024-07-02T00:49:52.164790429Z" level=info msg="RemovePodSandbox for \"c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05\"" Jul 2 00:49:52.164881 containerd[1351]: time="2024-07-02T00:49:52.164829591Z" level=info msg="Forcibly stopping sandbox \"c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05\"" Jul 2 00:49:52.243441 containerd[1351]: 2024-07-02 00:49:52.210 [WARNING][4908] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dczfn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"565b958b-5d9c-4fe5-96de-2157ed8f17c7", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 49, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df84471630a341de13b454ddb9b009e77d7ec326b9b967af793a816f3cfa8694", Pod:"csi-node-driver-dczfn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali5977f9ce63e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:49:52.243441 containerd[1351]: 2024-07-02 00:49:52.210 [INFO][4908] k8s.go 608: Cleaning up netns ContainerID="c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" Jul 2 00:49:52.243441 containerd[1351]: 2024-07-02 00:49:52.210 [INFO][4908] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" iface="eth0" netns="" Jul 2 00:49:52.243441 containerd[1351]: 2024-07-02 00:49:52.210 [INFO][4908] k8s.go 615: Releasing IP address(es) ContainerID="c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" Jul 2 00:49:52.243441 containerd[1351]: 2024-07-02 00:49:52.210 [INFO][4908] utils.go 188: Calico CNI releasing IP address ContainerID="c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" Jul 2 00:49:52.243441 containerd[1351]: 2024-07-02 00:49:52.229 [INFO][4916] ipam_plugin.go 411: Releasing address using handleID ContainerID="c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" HandleID="k8s-pod-network.c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" Workload="localhost-k8s-csi--node--driver--dczfn-eth0" Jul 2 00:49:52.243441 containerd[1351]: 2024-07-02 00:49:52.229 [INFO][4916] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:49:52.243441 containerd[1351]: 2024-07-02 00:49:52.229 [INFO][4916] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:49:52.243441 containerd[1351]: 2024-07-02 00:49:52.237 [WARNING][4916] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" HandleID="k8s-pod-network.c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" Workload="localhost-k8s-csi--node--driver--dczfn-eth0" Jul 2 00:49:52.243441 containerd[1351]: 2024-07-02 00:49:52.237 [INFO][4916] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" HandleID="k8s-pod-network.c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" Workload="localhost-k8s-csi--node--driver--dczfn-eth0" Jul 2 00:49:52.243441 containerd[1351]: 2024-07-02 00:49:52.239 [INFO][4916] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:49:52.243441 containerd[1351]: 2024-07-02 00:49:52.240 [INFO][4908] k8s.go 621: Teardown processing complete. ContainerID="c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05" Jul 2 00:49:52.243441 containerd[1351]: time="2024-07-02T00:49:52.242346206Z" level=info msg="TearDown network for sandbox \"c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05\" successfully" Jul 2 00:49:52.245027 containerd[1351]: time="2024-07-02T00:49:52.244996817Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 00:49:52.245107 containerd[1351]: time="2024-07-02T00:49:52.245051100Z" level=info msg="RemovePodSandbox \"c037c3f7d2a697ea5c90894e8c04ec0c4762d140255dadd7a9b4ba1778430b05\" returns successfully" Jul 2 00:49:52.245398 containerd[1351]: time="2024-07-02T00:49:52.245368596Z" level=info msg="StopPodSandbox for \"1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a\"" Jul 2 00:49:52.309239 containerd[1351]: 2024-07-02 00:49:52.279 [WARNING][4938] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--qwwn6-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"c5486d5f-6264-40ba-9f70-21d17d9388ed", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 49, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"298468d3f9bdcbb99ba4d52fc1af55fc94c558a613bd1317358a90d63a6899e8", Pod:"coredns-5dd5756b68-qwwn6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f8645b4ce9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:49:52.309239 containerd[1351]: 2024-07-02 00:49:52.279 [INFO][4938] k8s.go 608: Cleaning up netns ContainerID="1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" Jul 2 00:49:52.309239 containerd[1351]: 2024-07-02 00:49:52.279 [INFO][4938] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" iface="eth0" netns="" Jul 2 00:49:52.309239 containerd[1351]: 2024-07-02 00:49:52.279 [INFO][4938] k8s.go 615: Releasing IP address(es) ContainerID="1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" Jul 2 00:49:52.309239 containerd[1351]: 2024-07-02 00:49:52.279 [INFO][4938] utils.go 188: Calico CNI releasing IP address ContainerID="1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" Jul 2 00:49:52.309239 containerd[1351]: 2024-07-02 00:49:52.296 [INFO][4946] ipam_plugin.go 411: Releasing address using handleID ContainerID="1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" HandleID="k8s-pod-network.1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" Workload="localhost-k8s-coredns--5dd5756b68--qwwn6-eth0" Jul 2 00:49:52.309239 containerd[1351]: 2024-07-02 00:49:52.296 [INFO][4946] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:49:52.309239 containerd[1351]: 2024-07-02 00:49:52.296 [INFO][4946] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:49:52.309239 containerd[1351]: 2024-07-02 00:49:52.304 [WARNING][4946] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" HandleID="k8s-pod-network.1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" Workload="localhost-k8s-coredns--5dd5756b68--qwwn6-eth0" Jul 2 00:49:52.309239 containerd[1351]: 2024-07-02 00:49:52.304 [INFO][4946] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" HandleID="k8s-pod-network.1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" Workload="localhost-k8s-coredns--5dd5756b68--qwwn6-eth0" Jul 2 00:49:52.309239 containerd[1351]: 2024-07-02 00:49:52.306 [INFO][4946] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:49:52.309239 containerd[1351]: 2024-07-02 00:49:52.307 [INFO][4938] k8s.go 621: Teardown processing complete. ContainerID="1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" Jul 2 00:49:52.309683 containerd[1351]: time="2024-07-02T00:49:52.309288294Z" level=info msg="TearDown network for sandbox \"1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a\" successfully" Jul 2 00:49:52.309683 containerd[1351]: time="2024-07-02T00:49:52.309317736Z" level=info msg="StopPodSandbox for \"1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a\" returns successfully" Jul 2 00:49:52.310133 containerd[1351]: time="2024-07-02T00:49:52.310103735Z" level=info msg="RemovePodSandbox for \"1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a\"" Jul 2 00:49:52.310208 containerd[1351]: time="2024-07-02T00:49:52.310136816Z" level=info msg="Forcibly stopping sandbox \"1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a\"" Jul 2 00:49:52.382952 containerd[1351]: 2024-07-02 00:49:52.350 [WARNING][4969] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--qwwn6-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"c5486d5f-6264-40ba-9f70-21d17d9388ed", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 49, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"298468d3f9bdcbb99ba4d52fc1af55fc94c558a613bd1317358a90d63a6899e8", Pod:"coredns-5dd5756b68-qwwn6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f8645b4ce9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:49:52.382952 containerd[1351]: 2024-07-02 00:49:52.350 [INFO][4969] k8s.go 608: Cleaning up netns ContainerID="1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" Jul 2 00:49:52.382952 containerd[1351]: 2024-07-02 00:49:52.350 [INFO][4969] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" iface="eth0" netns="" Jul 2 00:49:52.382952 containerd[1351]: 2024-07-02 00:49:52.350 [INFO][4969] k8s.go 615: Releasing IP address(es) ContainerID="1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" Jul 2 00:49:52.382952 containerd[1351]: 2024-07-02 00:49:52.350 [INFO][4969] utils.go 188: Calico CNI releasing IP address ContainerID="1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" Jul 2 00:49:52.382952 containerd[1351]: 2024-07-02 00:49:52.370 [INFO][4977] ipam_plugin.go 411: Releasing address using handleID ContainerID="1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" HandleID="k8s-pod-network.1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" Workload="localhost-k8s-coredns--5dd5756b68--qwwn6-eth0" Jul 2 00:49:52.382952 containerd[1351]: 2024-07-02 00:49:52.370 [INFO][4977] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:49:52.382952 containerd[1351]: 2024-07-02 00:49:52.370 [INFO][4977] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:49:52.382952 containerd[1351]: 2024-07-02 00:49:52.378 [WARNING][4977] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" HandleID="k8s-pod-network.1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" Workload="localhost-k8s-coredns--5dd5756b68--qwwn6-eth0" Jul 2 00:49:52.382952 containerd[1351]: 2024-07-02 00:49:52.378 [INFO][4977] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" HandleID="k8s-pod-network.1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" Workload="localhost-k8s-coredns--5dd5756b68--qwwn6-eth0" Jul 2 00:49:52.382952 containerd[1351]: 2024-07-02 00:49:52.379 [INFO][4977] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:49:52.382952 containerd[1351]: 2024-07-02 00:49:52.381 [INFO][4969] k8s.go 621: Teardown processing complete. ContainerID="1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a" Jul 2 00:49:52.383381 containerd[1351]: time="2024-07-02T00:49:52.382980638Z" level=info msg="TearDown network for sandbox \"1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a\" successfully" Jul 2 00:49:52.389361 containerd[1351]: time="2024-07-02T00:49:52.389300032Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 00:49:52.389481 containerd[1351]: time="2024-07-02T00:49:52.389363716Z" level=info msg="RemovePodSandbox \"1f09fb0dcc7791f70158c95546f1175c173289518fc8d4e5d75755f2e6d3ba5a\" returns successfully" Jul 2 00:49:52.389793 containerd[1351]: time="2024-07-02T00:49:52.389769376Z" level=info msg="StopPodSandbox for \"1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090\"" Jul 2 00:49:52.457025 containerd[1351]: 2024-07-02 00:49:52.425 [WARNING][5000] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f49f44d4b--xn8pc-eth0", GenerateName:"calico-kube-controllers-f49f44d4b-", Namespace:"calico-system", SelfLink:"", UID:"72fe4de1-b2e6-441a-b885-9fce4e3cebac", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 49, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f49f44d4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b638965aa730f99f3ec7c0b8f17ea443872f5f62b97cbfa56be4835a9e21ce6", Pod:"calico-kube-controllers-f49f44d4b-xn8pc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1428750659c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:49:52.457025 containerd[1351]: 2024-07-02 00:49:52.425 [INFO][5000] k8s.go 608: Cleaning up netns ContainerID="1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" Jul 2 00:49:52.457025 containerd[1351]: 2024-07-02 00:49:52.425 [INFO][5000] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" iface="eth0" netns="" Jul 2 00:49:52.457025 containerd[1351]: 2024-07-02 00:49:52.425 [INFO][5000] k8s.go 615: Releasing IP address(es) ContainerID="1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" Jul 2 00:49:52.457025 containerd[1351]: 2024-07-02 00:49:52.425 [INFO][5000] utils.go 188: Calico CNI releasing IP address ContainerID="1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" Jul 2 00:49:52.457025 containerd[1351]: 2024-07-02 00:49:52.444 [INFO][5007] ipam_plugin.go 411: Releasing address using handleID ContainerID="1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" HandleID="k8s-pod-network.1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" Workload="localhost-k8s-calico--kube--controllers--f49f44d4b--xn8pc-eth0" Jul 2 00:49:52.457025 containerd[1351]: 2024-07-02 00:49:52.444 [INFO][5007] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:49:52.457025 containerd[1351]: 2024-07-02 00:49:52.444 [INFO][5007] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:49:52.457025 containerd[1351]: 2024-07-02 00:49:52.452 [WARNING][5007] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" HandleID="k8s-pod-network.1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" Workload="localhost-k8s-calico--kube--controllers--f49f44d4b--xn8pc-eth0" Jul 2 00:49:52.457025 containerd[1351]: 2024-07-02 00:49:52.452 [INFO][5007] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" HandleID="k8s-pod-network.1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" Workload="localhost-k8s-calico--kube--controllers--f49f44d4b--xn8pc-eth0" Jul 2 00:49:52.457025 containerd[1351]: 2024-07-02 00:49:52.454 [INFO][5007] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:49:52.457025 containerd[1351]: 2024-07-02 00:49:52.455 [INFO][5000] k8s.go 621: Teardown processing complete. ContainerID="1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" Jul 2 00:49:52.457551 containerd[1351]: time="2024-07-02T00:49:52.457516264Z" level=info msg="TearDown network for sandbox \"1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090\" successfully" Jul 2 00:49:52.457611 containerd[1351]: time="2024-07-02T00:49:52.457596868Z" level=info msg="StopPodSandbox for \"1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090\" returns successfully" Jul 2 00:49:52.458216 containerd[1351]: time="2024-07-02T00:49:52.458175457Z" level=info msg="RemovePodSandbox for \"1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090\"" Jul 2 00:49:52.458401 containerd[1351]: time="2024-07-02T00:49:52.458344105Z" level=info msg="Forcibly stopping sandbox \"1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090\"" Jul 2 00:49:52.525528 containerd[1351]: 2024-07-02 00:49:52.490 [WARNING][5029] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f49f44d4b--xn8pc-eth0", GenerateName:"calico-kube-controllers-f49f44d4b-", Namespace:"calico-system", SelfLink:"", UID:"72fe4de1-b2e6-441a-b885-9fce4e3cebac", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 49, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f49f44d4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b638965aa730f99f3ec7c0b8f17ea443872f5f62b97cbfa56be4835a9e21ce6", Pod:"calico-kube-controllers-f49f44d4b-xn8pc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1428750659c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:49:52.525528 containerd[1351]: 2024-07-02 00:49:52.490 [INFO][5029] k8s.go 608: Cleaning up netns ContainerID="1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" Jul 2 00:49:52.525528 containerd[1351]: 2024-07-02 00:49:52.490 [INFO][5029] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" iface="eth0" netns="" Jul 2 00:49:52.525528 containerd[1351]: 2024-07-02 00:49:52.490 [INFO][5029] k8s.go 615: Releasing IP address(es) ContainerID="1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" Jul 2 00:49:52.525528 containerd[1351]: 2024-07-02 00:49:52.490 [INFO][5029] utils.go 188: Calico CNI releasing IP address ContainerID="1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" Jul 2 00:49:52.525528 containerd[1351]: 2024-07-02 00:49:52.511 [INFO][5036] ipam_plugin.go 411: Releasing address using handleID ContainerID="1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" HandleID="k8s-pod-network.1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" Workload="localhost-k8s-calico--kube--controllers--f49f44d4b--xn8pc-eth0" Jul 2 00:49:52.525528 containerd[1351]: 2024-07-02 00:49:52.511 [INFO][5036] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:49:52.525528 containerd[1351]: 2024-07-02 00:49:52.511 [INFO][5036] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:49:52.525528 containerd[1351]: 2024-07-02 00:49:52.519 [WARNING][5036] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" HandleID="k8s-pod-network.1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" Workload="localhost-k8s-calico--kube--controllers--f49f44d4b--xn8pc-eth0" Jul 2 00:49:52.525528 containerd[1351]: 2024-07-02 00:49:52.519 [INFO][5036] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" HandleID="k8s-pod-network.1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" Workload="localhost-k8s-calico--kube--controllers--f49f44d4b--xn8pc-eth0" Jul 2 00:49:52.525528 containerd[1351]: 2024-07-02 00:49:52.521 [INFO][5036] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:49:52.525528 containerd[1351]: 2024-07-02 00:49:52.522 [INFO][5029] k8s.go 621: Teardown processing complete. ContainerID="1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090" Jul 2 00:49:52.526066 containerd[1351]: time="2024-07-02T00:49:52.526027671Z" level=info msg="TearDown network for sandbox \"1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090\" successfully" Jul 2 00:49:52.528980 containerd[1351]: time="2024-07-02T00:49:52.528945336Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 00:49:52.529151 containerd[1351]: time="2024-07-02T00:49:52.529130025Z" level=info msg="RemovePodSandbox \"1ecfc300a2f83cc8a4c2d39af9b0bb2095169a6859ebea8b013cd6fe6ad76090\" returns successfully" Jul 2 00:49:53.276674 systemd[1]: Started sshd@18-10.0.0.149:22-10.0.0.1:36250.service - OpenSSH per-connection server daemon (10.0.0.1:36250). Jul 2 00:49:53.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.149:22-10.0.0.1:36250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:49:53.277354 kernel: kauditd_printk_skb: 7 callbacks suppressed Jul 2 00:49:53.277417 kernel: audit: type=1130 audit(1719881393.275:418): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.149:22-10.0.0.1:36250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:49:53.312000 audit[5044]: USER_ACCT pid=5044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:53.313696 sshd[5044]: Accepted publickey for core from 10.0.0.1 port 36250 ssh2: RSA SHA256:Kp5j1k40JRjOvdBxgA97BAtl42hDR/cCrPEvDSx0HPk Jul 2 00:49:53.315615 sshd[5044]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:49:53.313000 audit[5044]: CRED_ACQ pid=5044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:53.318219 kernel: audit: type=1101 audit(1719881393.312:419): pid=5044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:53.318290 kernel: audit: type=1103 audit(1719881393.313:420): pid=5044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:53.318315 kernel: audit: type=1006 audit(1719881393.313:421): pid=5044 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jul 2 00:49:53.319568 kernel: audit: type=1300 audit(1719881393.313:421): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffcbad8c0 a2=3 a3=1 items=0 ppid=1 pid=5044 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:53.313000 audit[5044]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffcbad8c0 a2=3 a3=1 items=0 ppid=1 pid=5044 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:53.320395 systemd-logind[1334]: New session 19 of user core. Jul 2 00:49:53.331490 kernel: audit: type=1327 audit(1719881393.313:421): proctitle=737368643A20636F7265205B707269765D Jul 2 00:49:53.313000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 00:49:53.331592 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jul 2 00:49:53.336000 audit[5044]: USER_START pid=5044 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:53.338000 audit[5047]: CRED_ACQ pid=5047 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:53.342945 kernel: audit: type=1105 audit(1719881393.336:422): pid=5044 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:53.342989 kernel: audit: type=1103 audit(1719881393.338:423): pid=5047 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:53.472163 sshd[5044]: pam_unix(sshd:session): session closed for user core Jul 2 00:49:53.471000 audit[5044]: USER_END pid=5044 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:53.474974 systemd[1]: sshd@18-10.0.0.149:22-10.0.0.1:36250.service: Deactivated successfully. Jul 2 00:49:53.475143 systemd-logind[1334]: Session 19 logged out. Waiting for processes to exit. Jul 2 00:49:53.475834 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 00:49:53.472000 audit[5044]: CRED_DISP pid=5044 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:53.476407 systemd-logind[1334]: Removed session 19. Jul 2 00:49:53.478055 kernel: audit: type=1106 audit(1719881393.471:424): pid=5044 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:53.478137 kernel: audit: type=1104 audit(1719881393.472:425): pid=5044 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:53.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.149:22-10.0.0.1:36250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:49:58.484618 systemd[1]: Started sshd@19-10.0.0.149:22-10.0.0.1:36262.service - OpenSSH per-connection server daemon (10.0.0.1:36262). 
Jul 2 00:49:58.488064 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 2 00:49:58.488137 kernel: audit: type=1130 audit(1719881398.483:427): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.149:22-10.0.0.1:36262 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:49:58.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.149:22-10.0.0.1:36262 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:49:58.513000 audit[5072]: USER_ACCT pid=5072 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:58.514498 sshd[5072]: Accepted publickey for core from 10.0.0.1 port 36262 ssh2: RSA SHA256:Kp5j1k40JRjOvdBxgA97BAtl42hDR/cCrPEvDSx0HPk Jul 2 00:49:58.513000 audit[5072]: CRED_ACQ pid=5072 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:58.515570 sshd[5072]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:49:58.523426 kernel: audit: type=1101 audit(1719881398.513:428): pid=5072 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:58.523505 kernel: audit: type=1103 audit(1719881398.513:429): pid=5072 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:58.523523 kernel: audit: type=1006 audit(1719881398.514:430): pid=5072 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jul 2 00:49:58.523537 kernel: audit: type=1300 audit(1719881398.514:430): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc5449cf0 a2=3 a3=1 items=0 ppid=1 pid=5072 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:58.514000 audit[5072]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc5449cf0 a2=3 a3=1 items=0 ppid=1 pid=5072 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:49:58.521217 systemd-logind[1334]: New session 20 of user core. Jul 2 00:49:58.514000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 00:49:58.527552 kernel: audit: type=1327 audit(1719881398.514:430): proctitle=737368643A20636F7265205B707269765D Jul 2 00:49:58.532563 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jul 2 00:49:58.535000 audit[5072]: USER_START pid=5072 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:58.539925 kernel: audit: type=1105 audit(1719881398.535:431): pid=5072 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:58.538000 audit[5075]: CRED_ACQ pid=5075 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:58.543274 kernel: audit: type=1103 audit(1719881398.538:432): pid=5075 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:58.672738 sshd[5072]: pam_unix(sshd:session): session closed for user core Jul 2 00:49:58.672000 audit[5072]: USER_END pid=5072 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:58.675942 systemd[1]: sshd@19-10.0.0.149:22-10.0.0.1:36262.service: Deactivated successfully. Jul 2 00:49:58.672000 audit[5072]: CRED_DISP pid=5072 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:58.676777 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 00:49:58.676826 systemd-logind[1334]: Session 20 logged out. Waiting for processes to exit. Jul 2 00:49:58.677613 systemd-logind[1334]: Removed session 20. Jul 2 00:49:58.679953 kernel: audit: type=1106 audit(1719881398.672:433): pid=5072 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:58.680028 kernel: audit: type=1104 audit(1719881398.672:434): pid=5072 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 00:49:58.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.149:22-10.0.0.1:36262 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:50:00.258166 systemd[1]: run-containerd-runc-k8s.io-352b5c5995b2904e5707fd21fbdb0499d65c503bd99a4e485687a07697e7fc87-runc.iZBxHp.mount: Deactivated successfully. 
Jul 2 00:50:00.315136 kubelet[2384]: E0702 00:50:00.315091 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"