Feb 9 09:42:19.740480 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 9 09:42:19.740501 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 08:56:26 -00 2024
Feb 9 09:42:19.740509 kernel: efi: EFI v2.70 by EDK II
Feb 9 09:42:19.740515 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Feb 9 09:42:19.740520 kernel: random: crng init done
Feb 9 09:42:19.740525 kernel: ACPI: Early table checksum verification disabled
Feb 9 09:42:19.740532 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Feb 9 09:42:19.740538 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 9 09:42:19.740544 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:42:19.740549 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:42:19.740555 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:42:19.740560 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:42:19.740566 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:42:19.740571 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:42:19.740579 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:42:19.740585 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:42:19.740591 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 09:42:19.740596 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 9 09:42:19.740602 kernel: NUMA: Failed to initialise from firmware
Feb 9 09:42:19.740608 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 09:42:19.740614 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Feb 9 09:42:19.740620 kernel: Zone ranges:
Feb 9 09:42:19.740625 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 09:42:19.740632 kernel: DMA32 empty
Feb 9 09:42:19.740637 kernel: Normal empty
Feb 9 09:42:19.740643 kernel: Movable zone start for each node
Feb 9 09:42:19.740649 kernel: Early memory node ranges
Feb 9 09:42:19.740654 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Feb 9 09:42:19.740660 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Feb 9 09:42:19.740665 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Feb 9 09:42:19.740671 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Feb 9 09:42:19.740677 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Feb 9 09:42:19.740682 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Feb 9 09:42:19.740688 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Feb 9 09:42:19.740693 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 09:42:19.740700 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 9 09:42:19.740706 kernel: psci: probing for conduit method from ACPI.
Feb 9 09:42:19.740712 kernel: psci: PSCIv1.1 detected in firmware.
Feb 9 09:42:19.740717 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 09:42:19.740723 kernel: psci: Trusted OS migration not required
Feb 9 09:42:19.740731 kernel: psci: SMC Calling Convention v1.1
Feb 9 09:42:19.740737 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 9 09:42:19.740745 kernel: ACPI: SRAT not present
Feb 9 09:42:19.740751 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 09:42:19.740757 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 09:42:19.740763 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 9 09:42:19.740769 kernel: Detected PIPT I-cache on CPU0
Feb 9 09:42:19.740775 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 09:42:19.740781 kernel: CPU features: detected: Hardware dirty bit management
Feb 9 09:42:19.740787 kernel: CPU features: detected: Spectre-v4
Feb 9 09:42:19.740793 kernel: CPU features: detected: Spectre-BHB
Feb 9 09:42:19.740801 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 09:42:19.740807 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 09:42:19.740813 kernel: CPU features: detected: ARM erratum 1418040
Feb 9 09:42:19.740819 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 9 09:42:19.740825 kernel: Policy zone: DMA
Feb 9 09:42:19.740832 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 09:42:19.740838 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 09:42:19.740844 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 09:42:19.740851 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 09:42:19.740857 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 09:42:19.740863 kernel: Memory: 2459152K/2572288K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 113136K reserved, 0K cma-reserved)
Feb 9 09:42:19.740870 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 9 09:42:19.740876 kernel: trace event string verifier disabled
Feb 9 09:42:19.740882 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 09:42:19.740889 kernel: rcu: RCU event tracing is enabled.
Feb 9 09:42:19.740895 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 9 09:42:19.740901 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 09:42:19.740907 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 09:42:19.740913 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 09:42:19.740919 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 9 09:42:19.740925 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 09:42:19.740931 kernel: GICv3: 256 SPIs implemented
Feb 9 09:42:19.740938 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 09:42:19.740944 kernel: GICv3: Distributor has no Range Selector support
Feb 9 09:42:19.740950 kernel: Root IRQ handler: gic_handle_irq
Feb 9 09:42:19.740956 kernel: GICv3: 16 PPIs implemented
Feb 9 09:42:19.740962 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 9 09:42:19.740968 kernel: ACPI: SRAT not present
Feb 9 09:42:19.740974 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 9 09:42:19.740980 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 9 09:42:19.740986 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Feb 9 09:42:19.740992 kernel: GICv3: using LPI property table @0x00000000400d0000
Feb 9 09:42:19.740998 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Feb 9 09:42:19.741004 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 09:42:19.741012 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 9 09:42:19.741018 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 9 09:42:19.741024 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 9 09:42:19.741030 kernel: arm-pv: using stolen time PV
Feb 9 09:42:19.741037 kernel: Console: colour dummy device 80x25
Feb 9 09:42:19.741043 kernel: ACPI: Core revision 20210730
Feb 9 09:42:19.741049 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 9 09:42:19.741056 kernel: pid_max: default: 32768 minimum: 301
Feb 9 09:42:19.741062 kernel: LSM: Security Framework initializing
Feb 9 09:42:19.741068 kernel: SELinux: Initializing.
Feb 9 09:42:19.741076 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 09:42:19.741082 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 09:42:19.741088 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 09:42:19.741094 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 9 09:42:19.741100 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 9 09:42:19.741107 kernel: Remapping and enabling EFI services.
Feb 9 09:42:19.741113 kernel: smp: Bringing up secondary CPUs ...
Feb 9 09:42:19.741119 kernel: Detected PIPT I-cache on CPU1
Feb 9 09:42:19.741125 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 9 09:42:19.741133 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Feb 9 09:42:19.741139 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 09:42:19.741145 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 9 09:42:19.741151 kernel: Detected PIPT I-cache on CPU2
Feb 9 09:42:19.741158 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 9 09:42:19.741164 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Feb 9 09:42:19.741170 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 09:42:19.741176 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 9 09:42:19.741183 kernel: Detected PIPT I-cache on CPU3
Feb 9 09:42:19.741189 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 9 09:42:19.741196 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Feb 9 09:42:19.741203 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 09:42:19.741209 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 9 09:42:19.741215 kernel: smp: Brought up 1 node, 4 CPUs
Feb 9 09:42:19.741235 kernel: SMP: Total of 4 processors activated.
Feb 9 09:42:19.741243 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 09:42:19.741249 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 9 09:42:19.741256 kernel: CPU features: detected: Common not Private translations
Feb 9 09:42:19.741262 kernel: CPU features: detected: CRC32 instructions
Feb 9 09:42:19.741269 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 9 09:42:19.741275 kernel: CPU features: detected: LSE atomic instructions
Feb 9 09:42:19.741282 kernel: CPU features: detected: Privileged Access Never
Feb 9 09:42:19.741290 kernel: CPU features: detected: RAS Extension Support
Feb 9 09:42:19.741305 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 9 09:42:19.741312 kernel: CPU: All CPU(s) started at EL1
Feb 9 09:42:19.741318 kernel: alternatives: patching kernel code
Feb 9 09:42:19.741326 kernel: devtmpfs: initialized
Feb 9 09:42:19.741333 kernel: KASLR enabled
Feb 9 09:42:19.741339 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 09:42:19.741346 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 9 09:42:19.741353 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 09:42:19.741359 kernel: SMBIOS 3.0.0 present.
Feb 9 09:42:19.741366 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Feb 9 09:42:19.741372 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 09:42:19.741379 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 09:42:19.741385 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 09:42:19.741393 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 09:42:19.741400 kernel: audit: initializing netlink subsys (disabled)
Feb 9 09:42:19.741407 kernel: audit: type=2000 audit(0.037:1): state=initialized audit_enabled=0 res=1
Feb 9 09:42:19.741413 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 09:42:19.741420 kernel: cpuidle: using governor menu
Feb 9 09:42:19.741426 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 09:42:19.741433 kernel: ASID allocator initialised with 32768 entries
Feb 9 09:42:19.741439 kernel: ACPI: bus type PCI registered
Feb 9 09:42:19.741446 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 09:42:19.741454 kernel: Serial: AMBA PL011 UART driver
Feb 9 09:42:19.741460 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 09:42:19.741467 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 09:42:19.741473 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 09:42:19.741480 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 09:42:19.741486 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 09:42:19.741493 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 09:42:19.741499 kernel: ACPI: Added _OSI(Module Device)
Feb 9 09:42:19.741506 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 09:42:19.741514 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 09:42:19.741520 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 09:42:19.741527 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 09:42:19.741534 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 09:42:19.741540 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 09:42:19.741546 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 09:42:19.741553 kernel: ACPI: Interpreter enabled
Feb 9 09:42:19.741559 kernel: ACPI: Using GIC for interrupt routing
Feb 9 09:42:19.741566 kernel: ACPI: MCFG table detected, 1 entries
Feb 9 09:42:19.741574 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 9 09:42:19.741580 kernel: printk: console [ttyAMA0] enabled
Feb 9 09:42:19.741587 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 9 09:42:19.741721 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 09:42:19.741785 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 9 09:42:19.741844 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 9 09:42:19.741900 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 9 09:42:19.741959 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 9 09:42:19.741968 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 9 09:42:19.741975 kernel: PCI host bridge to bus 0000:00
Feb 9 09:42:19.742040 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 9 09:42:19.742867 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 9 09:42:19.742960 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 9 09:42:19.743021 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 9 09:42:19.743104 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 9 09:42:19.743180 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 9 09:42:19.743255 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 9 09:42:19.743334 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 9 09:42:19.743397 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 9 09:42:19.743457 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 9 09:42:19.743516 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 9 09:42:19.743580 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 9 09:42:19.743634 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 9 09:42:19.743690 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 9 09:42:19.743744 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 9 09:42:19.743753 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 9 09:42:19.743760 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 9 09:42:19.743766 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 9 09:42:19.743809 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 9 09:42:19.743817 kernel: iommu: Default domain type: Translated
Feb 9 09:42:19.743824 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 09:42:19.743831 kernel: vgaarb: loaded
Feb 9 09:42:19.743838 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 09:42:19.743845 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 09:42:19.743852 kernel: PTP clock support registered
Feb 9 09:42:19.743858 kernel: Registered efivars operations
Feb 9 09:42:19.743865 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 09:42:19.743871 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 09:42:19.743880 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 09:42:19.743887 kernel: pnp: PnP ACPI init
Feb 9 09:42:19.743962 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 9 09:42:19.743972 kernel: pnp: PnP ACPI: found 1 devices
Feb 9 09:42:19.743979 kernel: NET: Registered PF_INET protocol family
Feb 9 09:42:19.743986 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 09:42:19.743992 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 09:42:19.743999 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 09:42:19.744008 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 09:42:19.744015 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 09:42:19.744022 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 09:42:19.744028 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 09:42:19.744035 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 09:42:19.744042 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 09:42:19.744048 kernel: PCI: CLS 0 bytes, default 64
Feb 9 09:42:19.744055 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 9 09:42:19.744063 kernel: kvm [1]: HYP mode not available
Feb 9 09:42:19.744070 kernel: Initialise system trusted keyrings
Feb 9 09:42:19.744076 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 09:42:19.744083 kernel: Key type asymmetric registered
Feb 9 09:42:19.744090 kernel: Asymmetric key parser 'x509' registered
Feb 9 09:42:19.744096 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 09:42:19.744103 kernel: io scheduler mq-deadline registered
Feb 9 09:42:19.744110 kernel: io scheduler kyber registered
Feb 9 09:42:19.744116 kernel: io scheduler bfq registered
Feb 9 09:42:19.744123 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 9 09:42:19.744131 kernel: ACPI: button: Power Button [PWRB]
Feb 9 09:42:19.744138 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 9 09:42:19.744199 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 9 09:42:19.744208 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 09:42:19.744214 kernel: thunder_xcv, ver 1.0
Feb 9 09:42:19.744229 kernel: thunder_bgx, ver 1.0
Feb 9 09:42:19.744236 kernel: nicpf, ver 1.0
Feb 9 09:42:19.744242 kernel: nicvf, ver 1.0
Feb 9 09:42:19.744332 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 09:42:19.744394 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T09:42:19 UTC (1707471739)
Feb 9 09:42:19.744404 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 09:42:19.744410 kernel: NET: Registered PF_INET6 protocol family
Feb 9 09:42:19.744417 kernel: Segment Routing with IPv6
Feb 9 09:42:19.744424 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 09:42:19.744430 kernel: NET: Registered PF_PACKET protocol family
Feb 9 09:42:19.744437 kernel: Key type dns_resolver registered
Feb 9 09:42:19.744444 kernel: registered taskstats version 1
Feb 9 09:42:19.744452 kernel: Loading compiled-in X.509 certificates
Feb 9 09:42:19.744459 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: ca91574208414224935c9cea513398977daf917d'
Feb 9 09:42:19.744466 kernel: Key type .fscrypt registered
Feb 9 09:42:19.744472 kernel: Key type fscrypt-provisioning registered
Feb 9 09:42:19.744479 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 09:42:19.744486 kernel: ima: Allocated hash algorithm: sha1
Feb 9 09:42:19.744492 kernel: ima: No architecture policies found
Feb 9 09:42:19.744499 kernel: Freeing unused kernel memory: 34688K
Feb 9 09:42:19.744506 kernel: Run /init as init process
Feb 9 09:42:19.744514 kernel: with arguments:
Feb 9 09:42:19.744520 kernel: /init
Feb 9 09:42:19.744526 kernel: with environment:
Feb 9 09:42:19.744533 kernel: HOME=/
Feb 9 09:42:19.744539 kernel: TERM=linux
Feb 9 09:42:19.744546 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 09:42:19.744555 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 09:42:19.744563 systemd[1]: Detected virtualization kvm.
Feb 9 09:42:19.744572 systemd[1]: Detected architecture arm64.
Feb 9 09:42:19.744579 systemd[1]: Running in initrd.
Feb 9 09:42:19.744586 systemd[1]: No hostname configured, using default hostname.
Feb 9 09:42:19.744593 systemd[1]: Hostname set to .
Feb 9 09:42:19.744601 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 09:42:19.744608 systemd[1]: Queued start job for default target initrd.target.
Feb 9 09:42:19.744615 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 09:42:19.744622 systemd[1]: Reached target cryptsetup.target.
Feb 9 09:42:19.744631 systemd[1]: Reached target paths.target.
Feb 9 09:42:19.744638 systemd[1]: Reached target slices.target.
Feb 9 09:42:19.744645 systemd[1]: Reached target swap.target.
Feb 9 09:42:19.744652 systemd[1]: Reached target timers.target.
Feb 9 09:42:19.744659 systemd[1]: Listening on iscsid.socket.
Feb 9 09:42:19.744666 systemd[1]: Listening on iscsiuio.socket.
Feb 9 09:42:19.744673 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 09:42:19.744682 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 09:42:19.744689 systemd[1]: Listening on systemd-journald.socket.
Feb 9 09:42:19.744696 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 09:42:19.744703 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 09:42:19.744710 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 09:42:19.744718 systemd[1]: Reached target sockets.target.
Feb 9 09:42:19.744725 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 09:42:19.744732 systemd[1]: Finished network-cleanup.service.
Feb 9 09:42:19.744739 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 09:42:19.744747 systemd[1]: Starting systemd-journald.service...
Feb 9 09:42:19.744754 systemd[1]: Starting systemd-modules-load.service...
Feb 9 09:42:19.744761 systemd[1]: Starting systemd-resolved.service...
Feb 9 09:42:19.744768 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 09:42:19.744775 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 09:42:19.744782 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 09:42:19.744790 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 09:42:19.744797 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 09:42:19.744804 kernel: audit: type=1130 audit(1707471739.739:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:19.744812 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 09:42:19.744824 systemd-journald[290]: Journal started
Feb 9 09:42:19.744867 systemd-journald[290]: Runtime Journal (/run/log/journal/9138ce7794474e09bfd7a4c4dce399ee) is 6.0M, max 48.7M, 42.6M free.
Feb 9 09:42:19.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:19.732895 systemd-modules-load[291]: Inserted module 'overlay'
Feb 9 09:42:19.749316 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 09:42:19.749353 systemd[1]: Started systemd-journald.service.
Feb 9 09:42:19.753203 kernel: audit: type=1130 audit(1707471739.749:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:19.753243 kernel: Bridge firewalling registered
Feb 9 09:42:19.753253 kernel: audit: type=1130 audit(1707471739.752:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:19.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:19.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:19.750322 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 09:42:19.753315 systemd-modules-load[291]: Inserted module 'br_netfilter'
Feb 9 09:42:19.756254 systemd-resolved[292]: Positive Trust Anchors:
Feb 9 09:42:19.756262 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 09:42:19.756290 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 09:42:19.760595 systemd-resolved[292]: Defaulting to hostname 'linux'.
Feb 9 09:42:19.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:19.765332 systemd[1]: Started systemd-resolved.service.
Feb 9 09:42:19.769443 kernel: audit: type=1130 audit(1707471739.765:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:19.769464 kernel: SCSI subsystem initialized
Feb 9 09:42:19.766142 systemd[1]: Reached target nss-lookup.target.
Feb 9 09:42:19.770513 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 09:42:19.777141 kernel: audit: type=1130 audit(1707471739.770:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:19.777162 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 09:42:19.777178 kernel: device-mapper: uevent: version 1.0.3
Feb 9 09:42:19.777187 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 09:42:19.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:19.772075 systemd[1]: Starting dracut-cmdline.service...
Feb 9 09:42:19.777257 systemd-modules-load[291]: Inserted module 'dm_multipath'
Feb 9 09:42:19.778578 systemd[1]: Finished systemd-modules-load.service.
Feb 9 09:42:19.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:19.782321 kernel: audit: type=1130 audit(1707471739.779:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:19.780631 systemd[1]: Starting systemd-sysctl.service...
Feb 9 09:42:19.787801 dracut-cmdline[308]: dracut-dracut-053
Feb 9 09:42:19.788674 systemd[1]: Finished systemd-sysctl.service.
Feb 9 09:42:19.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:19.791820 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 09:42:19.796021 kernel: audit: type=1130 audit(1707471739.789:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:19.853352 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 09:42:19.861339 kernel: iscsi: registered transport (tcp)
Feb 9 09:42:19.874534 kernel: iscsi: registered transport (qla4xxx)
Feb 9 09:42:19.874557 kernel: QLogic iSCSI HBA Driver
Feb 9 09:42:19.907121 systemd[1]: Finished dracut-cmdline.service.
Feb 9 09:42:19.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:19.910109 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 09:42:19.913265 kernel: audit: type=1130 audit(1707471739.907:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:19.961356 kernel: raid6: neonx8 gen() 13631 MB/s
Feb 9 09:42:19.978332 kernel: raid6: neonx8 xor() 10705 MB/s
Feb 9 09:42:19.995334 kernel: raid6: neonx4 gen() 13385 MB/s
Feb 9 09:42:20.012337 kernel: raid6: neonx4 xor() 11196 MB/s
Feb 9 09:42:20.029324 kernel: raid6: neonx2 gen() 12899 MB/s
Feb 9 09:42:20.046336 kernel: raid6: neonx2 xor() 10147 MB/s
Feb 9 09:42:20.063346 kernel: raid6: neonx1 gen() 10353 MB/s
Feb 9 09:42:20.080352 kernel: raid6: neonx1 xor() 8690 MB/s
Feb 9 09:42:20.097315 kernel: raid6: int64x8 gen() 6281 MB/s
Feb 9 09:42:20.114314 kernel: raid6: int64x8 xor() 3539 MB/s
Feb 9 09:42:20.131319 kernel: raid6: int64x4 gen() 7234 MB/s
Feb 9 09:42:20.148320 kernel: raid6: int64x4 xor() 3847 MB/s
Feb 9 09:42:20.165320 kernel: raid6: int64x2 gen() 6143 MB/s
Feb 9 09:42:20.182320 kernel: raid6: int64x2 xor() 3317 MB/s
Feb 9 09:42:20.199320 kernel: raid6: int64x1 gen() 5040 MB/s
Feb 9 09:42:20.216501 kernel: raid6: int64x1 xor() 2642 MB/s
Feb 9 09:42:20.216521 kernel: raid6: using algorithm neonx8 gen() 13631 MB/s
Feb 9 09:42:20.216530 kernel: raid6: .... xor() 10705 MB/s, rmw enabled
Feb 9 09:42:20.216538 kernel: raid6: using neon recovery algorithm
Feb 9 09:42:20.227546 kernel: xor: measuring software checksum speed
Feb 9 09:42:20.227573 kernel: 8regs : 17308 MB/sec
Feb 9 09:42:20.228377 kernel: 32regs : 20755 MB/sec
Feb 9 09:42:20.229533 kernel: arm64_neon : 27949 MB/sec
Feb 9 09:42:20.229544 kernel: xor: using function: arm64_neon (27949 MB/sec)
Feb 9 09:42:20.282324 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 09:42:20.292538 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 09:42:20.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:20.295000 audit: BPF prog-id=7 op=LOAD
Feb 9 09:42:20.295000 audit: BPF prog-id=8 op=LOAD
Feb 9 09:42:20.295924 systemd[1]: Starting systemd-udevd.service...
Feb 9 09:42:20.297073 kernel: audit: type=1130 audit(1707471740.293:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:20.309651 systemd-udevd[490]: Using default interface naming scheme 'v252'.
Feb 9 09:42:20.313105 systemd[1]: Started systemd-udevd.service.
Feb 9 09:42:20.314744 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 09:42:20.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:20.333312 dracut-pre-trigger[493]: rd.md=0: removing MD RAID activation
Feb 9 09:42:20.337204 kernel: hrtimer: interrupt took 7976200 ns
Feb 9 09:42:20.359272 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 09:42:20.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:20.360768 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 09:42:20.396177 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 09:42:20.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:42:20.429316 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 9 09:42:20.431321 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 9 09:42:20.431351 kernel: GPT:9289727 != 19775487 Feb 9 09:42:20.431361 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 09:42:20.431369 kernel: GPT:9289727 != 19775487 Feb 9 09:42:20.431376 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 09:42:20.431385 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 09:42:20.445448 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 09:42:20.447319 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (555) Feb 9 09:42:20.450888 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 09:42:20.451878 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 09:42:20.457652 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 09:42:20.462781 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 09:42:20.465165 systemd[1]: Starting disk-uuid.service... Feb 9 09:42:20.549497 disk-uuid[564]: Primary Header is updated. Feb 9 09:42:20.549497 disk-uuid[564]: Secondary Entries is updated. Feb 9 09:42:20.549497 disk-uuid[564]: Secondary Header is updated. Feb 9 09:42:20.552599 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 09:42:21.562843 disk-uuid[565]: The operation has completed successfully. Feb 9 09:42:21.564356 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 09:42:21.584667 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 09:42:21.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:21.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:21.584763 systemd[1]: Finished disk-uuid.service. 
Feb 9 09:42:21.588836 systemd[1]: Starting verity-setup.service... Feb 9 09:42:21.602323 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 9 09:42:21.624509 systemd[1]: Found device dev-mapper-usr.device. Feb 9 09:42:21.627198 systemd[1]: Mounting sysusr-usr.mount... Feb 9 09:42:21.629467 systemd[1]: Finished verity-setup.service. Feb 9 09:42:21.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:21.676976 systemd[1]: Mounted sysusr-usr.mount. Feb 9 09:42:21.678283 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 09:42:21.677837 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 09:42:21.678567 systemd[1]: Starting ignition-setup.service... Feb 9 09:42:21.680642 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 09:42:21.686530 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 09:42:21.686563 kernel: BTRFS info (device vda6): using free space tree Feb 9 09:42:21.686573 kernel: BTRFS info (device vda6): has skinny extents Feb 9 09:42:21.694854 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 09:42:21.701854 systemd[1]: Finished ignition-setup.service. Feb 9 09:42:21.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:21.703389 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 09:42:21.764028 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 09:42:21.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:42:21.765000 audit: BPF prog-id=9 op=LOAD Feb 9 09:42:21.766483 systemd[1]: Starting systemd-networkd.service... Feb 9 09:42:21.783265 ignition[650]: Ignition 2.14.0 Feb 9 09:42:21.783276 ignition[650]: Stage: fetch-offline Feb 9 09:42:21.783336 ignition[650]: no configs at "/usr/lib/ignition/base.d" Feb 9 09:42:21.783346 ignition[650]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 09:42:21.783510 ignition[650]: parsed url from cmdline: "" Feb 9 09:42:21.783513 ignition[650]: no config URL provided Feb 9 09:42:21.783518 ignition[650]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 09:42:21.783526 ignition[650]: no config at "/usr/lib/ignition/user.ign" Feb 9 09:42:21.783544 ignition[650]: op(1): [started] loading QEMU firmware config module Feb 9 09:42:21.783549 ignition[650]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 9 09:42:21.794045 ignition[650]: op(1): [finished] loading QEMU firmware config module Feb 9 09:42:21.802545 systemd-networkd[742]: lo: Link UP Feb 9 09:42:21.802555 systemd-networkd[742]: lo: Gained carrier Feb 9 09:42:21.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:21.803124 systemd-networkd[742]: Enumeration completed Feb 9 09:42:21.803218 systemd[1]: Started systemd-networkd.service. Feb 9 09:42:21.803513 systemd-networkd[742]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 09:42:21.804159 systemd[1]: Reached target network.target. Feb 9 09:42:21.804843 systemd-networkd[742]: eth0: Link UP Feb 9 09:42:21.804846 systemd-networkd[742]: eth0: Gained carrier Feb 9 09:42:21.806331 systemd[1]: Starting iscsiuio.service... Feb 9 09:42:21.814942 systemd[1]: Started iscsiuio.service. 
Feb 9 09:42:21.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:21.816406 systemd[1]: Starting iscsid.service... Feb 9 09:42:21.819795 iscsid[749]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 09:42:21.819795 iscsid[749]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 09:42:21.819795 iscsid[749]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 09:42:21.819795 iscsid[749]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 09:42:21.819795 iscsid[749]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 09:42:21.819795 iscsid[749]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 09:42:21.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:21.822585 systemd[1]: Started iscsid.service. Feb 9 09:42:21.827701 systemd[1]: Starting dracut-initqueue.service... Feb 9 09:42:21.832695 systemd-networkd[742]: eth0: DHCPv4 address 10.0.0.11/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 09:42:21.837903 systemd[1]: Finished dracut-initqueue.service. Feb 9 09:42:21.838915 systemd[1]: Reached target remote-fs-pre.target. Feb 9 09:42:21.840242 systemd[1]: Reached target remote-cryptsetup.target. 
Feb 9 09:42:21.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:21.841798 systemd[1]: Reached target remote-fs.target. Feb 9 09:42:21.843891 systemd[1]: Starting dracut-pre-mount.service... Feb 9 09:42:21.851485 systemd[1]: Finished dracut-pre-mount.service. Feb 9 09:42:21.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:21.891604 ignition[650]: parsing config with SHA512: 2db15af74d63f3667090a347f0fc6332f8760fc441edef03095833b8d1dc6509b31bc728083a152b76da0b990a29377fa9e25bc3a72bf7892a13d7ca4dbab0f5 Feb 9 09:42:21.936830 unknown[650]: fetched base config from "system" Feb 9 09:42:21.936845 unknown[650]: fetched user config from "qemu" Feb 9 09:42:21.937629 ignition[650]: fetch-offline: fetch-offline passed Feb 9 09:42:21.937693 ignition[650]: Ignition finished successfully Feb 9 09:42:21.939218 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 09:42:21.940046 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 9 09:42:21.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:21.940913 systemd[1]: Starting ignition-kargs.service... Feb 9 09:42:21.950383 ignition[764]: Ignition 2.14.0 Feb 9 09:42:21.950393 ignition[764]: Stage: kargs Feb 9 09:42:21.950488 ignition[764]: no configs at "/usr/lib/ignition/base.d" Feb 9 09:42:21.950498 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 09:42:21.952803 systemd[1]: Finished ignition-kargs.service. 
Feb 9 09:42:21.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:21.951610 ignition[764]: kargs: kargs passed Feb 9 09:42:21.951657 ignition[764]: Ignition finished successfully Feb 9 09:42:21.954930 systemd[1]: Starting ignition-disks.service... Feb 9 09:42:21.961285 ignition[770]: Ignition 2.14.0 Feb 9 09:42:21.961322 ignition[770]: Stage: disks Feb 9 09:42:21.961426 ignition[770]: no configs at "/usr/lib/ignition/base.d" Feb 9 09:42:21.961436 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 09:42:21.964121 systemd[1]: Finished ignition-disks.service. Feb 9 09:42:21.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:21.962661 ignition[770]: disks: disks passed Feb 9 09:42:21.965615 systemd[1]: Reached target initrd-root-device.target. Feb 9 09:42:21.962707 ignition[770]: Ignition finished successfully Feb 9 09:42:21.966715 systemd[1]: Reached target local-fs-pre.target. Feb 9 09:42:21.967770 systemd[1]: Reached target local-fs.target. Feb 9 09:42:21.969096 systemd[1]: Reached target sysinit.target. Feb 9 09:42:21.970234 systemd[1]: Reached target basic.target. Feb 9 09:42:21.972180 systemd[1]: Starting systemd-fsck-root.service... Feb 9 09:42:21.983148 systemd-fsck[778]: ROOT: clean, 602/553520 files, 56013/553472 blocks Feb 9 09:42:21.987459 systemd[1]: Finished systemd-fsck-root.service. Feb 9 09:42:21.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:21.989244 systemd[1]: Mounting sysroot.mount... 
Feb 9 09:42:22.000327 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 09:42:22.000392 systemd[1]: Mounted sysroot.mount. Feb 9 09:42:22.001145 systemd[1]: Reached target initrd-root-fs.target. Feb 9 09:42:22.003193 systemd[1]: Mounting sysroot-usr.mount... Feb 9 09:42:22.004021 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 9 09:42:22.004061 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 09:42:22.004084 systemd[1]: Reached target ignition-diskful.target. Feb 9 09:42:22.006194 systemd[1]: Mounted sysroot-usr.mount. Feb 9 09:42:22.008793 systemd[1]: Starting initrd-setup-root.service... Feb 9 09:42:22.014233 initrd-setup-root[788]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 09:42:22.018250 initrd-setup-root[796]: cut: /sysroot/etc/group: No such file or directory Feb 9 09:42:22.023022 initrd-setup-root[804]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 09:42:22.028137 initrd-setup-root[812]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 09:42:22.065778 systemd[1]: Finished initrd-setup-root.service. Feb 9 09:42:22.067477 systemd[1]: Starting ignition-mount.service... Feb 9 09:42:22.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:22.068720 systemd[1]: Starting sysroot-boot.service... Feb 9 09:42:22.073556 bash[829]: umount: /sysroot/usr/share/oem: not mounted. 
Feb 9 09:42:22.082390 ignition[831]: INFO : Ignition 2.14.0 Feb 9 09:42:22.082390 ignition[831]: INFO : Stage: mount Feb 9 09:42:22.083663 ignition[831]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 09:42:22.083663 ignition[831]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 09:42:22.085243 ignition[831]: INFO : mount: mount passed Feb 9 09:42:22.085243 ignition[831]: INFO : Ignition finished successfully Feb 9 09:42:22.084939 systemd[1]: Finished ignition-mount.service. Feb 9 09:42:22.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:22.093850 systemd[1]: Finished sysroot-boot.service. Feb 9 09:42:22.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:22.636244 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 09:42:22.642334 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (839) Feb 9 09:42:22.643530 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 09:42:22.643550 kernel: BTRFS info (device vda6): using free space tree Feb 9 09:42:22.643559 kernel: BTRFS info (device vda6): has skinny extents Feb 9 09:42:22.646964 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 09:42:22.648524 systemd[1]: Starting ignition-files.service... 
Feb 9 09:42:22.661858 ignition[859]: INFO : Ignition 2.14.0 Feb 9 09:42:22.661858 ignition[859]: INFO : Stage: files Feb 9 09:42:22.663053 ignition[859]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 09:42:22.663053 ignition[859]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 09:42:22.663053 ignition[859]: DEBUG : files: compiled without relabeling support, skipping Feb 9 09:42:22.667651 ignition[859]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 09:42:22.667651 ignition[859]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 09:42:22.670382 ignition[859]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 09:42:22.671404 ignition[859]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 09:42:22.671404 ignition[859]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 09:42:22.671199 unknown[859]: wrote ssh authorized keys file for user: core Feb 9 09:42:22.674138 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 09:42:22.674138 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 9 09:42:22.722395 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 09:42:22.776854 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 09:42:22.776854 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 9 09:42:22.779851 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1 Feb 9 09:42:23.110485 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 09:42:23.297971 ignition[859]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742 Feb 9 09:42:23.297971 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 9 09:42:23.301428 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 9 09:42:23.301428 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1 Feb 9 09:42:23.438857 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 09:42:23.448536 systemd-networkd[742]: eth0: Gained IPv6LL Feb 9 09:42:23.557909 ignition[859]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c Feb 9 09:42:23.557909 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 9 09:42:23.561454 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 09:42:23.561454 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 09:42:23.561454 ignition[859]: INFO : files: 
createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 09:42:23.561454 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1 Feb 9 09:42:23.655653 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 09:42:26.311057 ignition[859]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db Feb 9 09:42:26.313269 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 09:42:26.314547 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 09:42:26.314547 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubectl: attempt #1 Feb 9 09:42:26.335476 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 9 09:42:26.595952 ignition[859]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 3672fda0beebbbd636a2088f427463cbad32683ea4fbb1df61650552e63846b6a47db803ccb70c3db0a8f24746a23a5632bdc15a3fb78f4f7d833e7f86763c2a Feb 9 09:42:26.598055 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 09:42:26.598055 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 09:42:26.598055 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1 Feb 9 09:42:26.621066 ignition[859]: INFO : files: createFilesystemsFiles: 
createFiles: op(9): GET result: OK Feb 9 09:42:33.403637 ignition[859]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d Feb 9 09:42:33.406104 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 09:42:33.406104 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 09:42:33.406104 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 09:42:33.406104 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Feb 9 09:42:33.406104 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 09:42:33.406104 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 09:42:33.406104 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 09:42:33.406104 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 09:42:33.406104 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 09:42:33.406104 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 09:42:33.406104 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 09:42:33.406104 ignition[859]: INFO : 
files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 09:42:33.406104 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 09:42:33.406104 ignition[859]: INFO : files: op(10): [started] processing unit "containerd.service" Feb 9 09:42:33.406104 ignition[859]: INFO : files: op(10): op(11): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 09:42:33.406104 ignition[859]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 09:42:33.406104 ignition[859]: INFO : files: op(10): [finished] processing unit "containerd.service" Feb 9 09:42:33.431566 ignition[859]: INFO : files: op(12): [started] processing unit "prepare-cni-plugins.service" Feb 9 09:42:33.431566 ignition[859]: INFO : files: op(12): op(13): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 09:42:33.431566 ignition[859]: INFO : files: op(12): op(13): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 09:42:33.431566 ignition[859]: INFO : files: op(12): [finished] processing unit "prepare-cni-plugins.service" Feb 9 09:42:33.431566 ignition[859]: INFO : files: op(14): [started] processing unit "prepare-critools.service" Feb 9 09:42:33.431566 ignition[859]: INFO : files: op(14): op(15): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 09:42:33.431566 ignition[859]: INFO : files: op(14): op(15): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 09:42:33.431566 ignition[859]: INFO : files: op(14): 
[finished] processing unit "prepare-critools.service" Feb 9 09:42:33.431566 ignition[859]: INFO : files: op(16): [started] processing unit "prepare-helm.service" Feb 9 09:42:33.431566 ignition[859]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 09:42:33.431566 ignition[859]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 09:42:33.431566 ignition[859]: INFO : files: op(16): [finished] processing unit "prepare-helm.service" Feb 9 09:42:33.431566 ignition[859]: INFO : files: op(18): [started] processing unit "coreos-metadata.service" Feb 9 09:42:33.431566 ignition[859]: INFO : files: op(18): op(19): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 09:42:33.431566 ignition[859]: INFO : files: op(18): op(19): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 09:42:33.431566 ignition[859]: INFO : files: op(18): [finished] processing unit "coreos-metadata.service" Feb 9 09:42:33.431566 ignition[859]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 09:42:33.431566 ignition[859]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 09:42:33.454707 ignition[859]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-critools.service" Feb 9 09:42:33.454707 ignition[859]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 09:42:33.454707 ignition[859]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-helm.service" Feb 9 09:42:33.454707 ignition[859]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 09:42:33.454707 ignition[859]: INFO : files: op(1d): 
[started] setting preset to disabled for "coreos-metadata.service" Feb 9 09:42:33.454707 ignition[859]: INFO : files: op(1d): op(1e): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 09:42:33.454707 ignition[859]: INFO : files: op(1d): op(1e): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 09:42:33.454707 ignition[859]: INFO : files: op(1d): [finished] setting preset to disabled for "coreos-metadata.service" Feb 9 09:42:33.454707 ignition[859]: INFO : files: createResultFile: createFiles: op(1f): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 09:42:33.454707 ignition[859]: INFO : files: createResultFile: createFiles: op(1f): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 09:42:33.454707 ignition[859]: INFO : files: files passed Feb 9 09:42:33.454707 ignition[859]: INFO : Ignition finished successfully Feb 9 09:42:33.482052 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 9 09:42:33.482074 kernel: audit: type=1130 audit(1707471753.455:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.482085 kernel: audit: type=1130 audit(1707471753.464:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.482094 kernel: audit: type=1131 audit(1707471753.464:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.482103 kernel: audit: type=1130 audit(1707471753.470:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:42:33.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.454217 systemd[1]: Finished ignition-files.service. Feb 9 09:42:33.456130 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 09:42:33.460104 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 09:42:33.485220 initrd-setup-root-after-ignition[885]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 9 09:42:33.460776 systemd[1]: Starting ignition-quench.service... Feb 9 09:42:33.486905 initrd-setup-root-after-ignition[887]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 09:42:33.463641 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 09:42:33.463718 systemd[1]: Finished ignition-quench.service. Feb 9 09:42:33.465546 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 09:42:33.470893 systemd[1]: Reached target ignition-complete.target. Feb 9 09:42:33.477853 systemd[1]: Starting initrd-parse-etc.service... 
Feb 9 09:42:33.495201 kernel: audit: type=1130 audit(1707471753.491:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.495222 kernel: audit: type=1131 audit(1707471753.491:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.490212 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 09:42:33.490295 systemd[1]: Finished initrd-parse-etc.service. Feb 9 09:42:33.491464 systemd[1]: Reached target initrd-fs.target. Feb 9 09:42:33.495950 systemd[1]: Reached target initrd.target. Feb 9 09:42:33.497194 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 09:42:33.497909 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 09:42:33.507806 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 09:42:33.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.509345 systemd[1]: Starting initrd-cleanup.service... Feb 9 09:42:33.511782 kernel: audit: type=1130 audit(1707471753.508:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:42:33.516982 systemd[1]: Stopped target nss-lookup.target. Feb 9 09:42:33.517670 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 09:42:33.518698 systemd[1]: Stopped target timers.target. Feb 9 09:42:33.519668 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 09:42:33.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.519769 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 09:42:33.523788 kernel: audit: type=1131 audit(1707471753.520:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.520701 systemd[1]: Stopped target initrd.target. Feb 9 09:42:33.523396 systemd[1]: Stopped target basic.target. Feb 9 09:42:33.524365 systemd[1]: Stopped target ignition-complete.target. Feb 9 09:42:33.525433 systemd[1]: Stopped target ignition-diskful.target. Feb 9 09:42:33.526420 systemd[1]: Stopped target initrd-root-device.target. Feb 9 09:42:33.527467 systemd[1]: Stopped target remote-fs.target. Feb 9 09:42:33.528506 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 09:42:33.529586 systemd[1]: Stopped target sysinit.target. Feb 9 09:42:33.530523 systemd[1]: Stopped target local-fs.target. Feb 9 09:42:33.531473 systemd[1]: Stopped target local-fs-pre.target. Feb 9 09:42:33.532409 systemd[1]: Stopped target swap.target. Feb 9 09:42:33.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.533286 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 09:42:33.533403 systemd[1]: Stopped dracut-pre-mount.service. 
Feb 9 09:42:33.537708 kernel: audit: type=1131 audit(1707471753.534:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.534477 systemd[1]: Stopped target cryptsetup.target. Feb 9 09:42:33.538291 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 09:42:33.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.538418 systemd[1]: Stopped dracut-initqueue.service. Feb 9 09:42:33.542656 kernel: audit: type=1131 audit(1707471753.538:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.539361 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 09:42:33.539456 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 09:42:33.542337 systemd[1]: Stopped target paths.target. Feb 9 09:42:33.543180 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 09:42:33.547330 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 09:42:33.548074 systemd[1]: Stopped target slices.target. Feb 9 09:42:33.549156 systemd[1]: Stopped target sockets.target. Feb 9 09:42:33.550101 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 09:42:33.550178 systemd[1]: Closed iscsid.socket. 
Feb 9 09:42:33.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.551008 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 09:42:33.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.551104 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 09:42:33.552135 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 09:42:33.552233 systemd[1]: Stopped ignition-files.service. Feb 9 09:42:33.553895 systemd[1]: Stopping ignition-mount.service... Feb 9 09:42:33.554976 systemd[1]: Stopping iscsiuio.service... Feb 9 09:42:33.556510 systemd[1]: Stopping sysroot-boot.service... Feb 9 09:42:33.557375 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 09:42:33.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.557491 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 09:42:33.558498 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 09:42:33.558592 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 09:42:33.560820 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 09:42:33.560999 systemd[1]: Stopped iscsiuio.service. 
Feb 9 09:42:33.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.564951 ignition[900]: INFO : Ignition 2.14.0 Feb 9 09:42:33.564951 ignition[900]: INFO : Stage: umount Feb 9 09:42:33.564951 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 09:42:33.564951 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 09:42:33.564951 ignition[900]: INFO : umount: umount passed Feb 9 09:42:33.564951 ignition[900]: INFO : Ignition finished successfully Feb 9 09:42:33.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.564847 systemd[1]: Stopped target network.target. Feb 9 09:42:33.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.565725 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 09:42:33.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.565769 systemd[1]: Closed iscsiuio.socket. Feb 9 09:42:33.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.566824 systemd[1]: Stopping systemd-networkd.service... 
Feb 9 09:42:33.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.567804 systemd[1]: Stopping systemd-resolved.service... Feb 9 09:42:33.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.569698 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 09:42:33.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.570131 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 09:42:33.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.570222 systemd[1]: Finished initrd-cleanup.service. Feb 9 09:42:33.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.571245 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 09:42:33.571337 systemd[1]: Stopped ignition-mount.service. Feb 9 09:42:33.572330 systemd-networkd[742]: eth0: DHCPv6 lease lost Feb 9 09:42:33.572515 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 09:42:33.584000 audit: BPF prog-id=9 op=UNLOAD Feb 9 09:42:33.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:42:33.572580 systemd[1]: Stopped sysroot-boot.service. Feb 9 09:42:33.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.573557 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 09:42:33.573595 systemd[1]: Stopped ignition-disks.service. Feb 9 09:42:33.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.574798 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 09:42:33.574842 systemd[1]: Stopped ignition-kargs.service. Feb 9 09:42:33.590000 audit: BPF prog-id=6 op=UNLOAD Feb 9 09:42:33.575950 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 09:42:33.575988 systemd[1]: Stopped ignition-setup.service. Feb 9 09:42:33.577072 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 09:42:33.577110 systemd[1]: Stopped initrd-setup-root.service. Feb 9 09:42:33.578601 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 09:42:33.578695 systemd[1]: Stopped systemd-networkd.service. Feb 9 09:42:33.579975 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 09:42:33.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.580061 systemd[1]: Stopped systemd-resolved.service. Feb 9 09:42:33.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.581329 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Feb 9 09:42:33.581367 systemd[1]: Closed systemd-networkd.socket. Feb 9 09:42:33.582743 systemd[1]: Stopping network-cleanup.service... Feb 9 09:42:33.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.583796 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 09:42:33.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.583857 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 09:42:33.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.585201 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 09:42:33.585245 systemd[1]: Stopped systemd-sysctl.service. Feb 9 09:42:33.586885 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 09:42:33.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.586926 systemd[1]: Stopped systemd-modules-load.service. Feb 9 09:42:33.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.588413 systemd[1]: Stopping systemd-udevd.service... Feb 9 09:42:33.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:42:33.590101 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 09:42:33.593697 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 09:42:33.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:33.593791 systemd[1]: Stopped network-cleanup.service. Feb 9 09:42:33.595085 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 09:42:33.595222 systemd[1]: Stopped systemd-udevd.service. Feb 9 09:42:33.596335 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 09:42:33.596371 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 09:42:33.597249 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 09:42:33.597285 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 09:42:33.598595 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 09:42:33.598640 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 09:42:33.599717 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 09:42:33.599756 systemd[1]: Stopped dracut-cmdline.service. Feb 9 09:42:33.600806 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 09:42:33.600845 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 09:42:33.619000 audit: BPF prog-id=5 op=UNLOAD Feb 9 09:42:33.619000 audit: BPF prog-id=4 op=UNLOAD Feb 9 09:42:33.619000 audit: BPF prog-id=3 op=UNLOAD Feb 9 09:42:33.602809 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 09:42:33.603916 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Feb 9 09:42:33.603975 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 09:42:33.605795 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 09:42:33.621000 audit: BPF prog-id=8 op=UNLOAD Feb 9 09:42:33.621000 audit: BPF prog-id=7 op=UNLOAD Feb 9 09:42:33.605837 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 09:42:33.606656 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 09:42:33.606695 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 09:42:33.608691 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 9 09:42:33.609078 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 09:42:33.609163 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 09:42:33.610218 systemd[1]: Reached target initrd-switch-root.target. Feb 9 09:42:33.612123 systemd[1]: Starting initrd-switch-root.service... Feb 9 09:42:33.618872 systemd[1]: Switching root. Feb 9 09:42:33.639571 iscsid[749]: iscsid shutting down. Feb 9 09:42:33.640190 systemd-journald[290]: Journal stopped Feb 9 09:42:35.607825 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). Feb 9 09:42:35.607884 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 09:42:35.607896 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 9 09:42:35.607907 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 09:42:35.607921 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 09:42:35.607934 kernel: SELinux: policy capability open_perms=1 Feb 9 09:42:35.607943 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 09:42:35.607953 kernel: SELinux: policy capability always_check_network=0 Feb 9 09:42:35.607963 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 09:42:35.607972 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 09:42:35.607982 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 09:42:35.607991 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 09:42:35.608002 systemd[1]: Successfully loaded SELinux policy in 37.759ms. Feb 9 09:42:35.608018 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.383ms. Feb 9 09:42:35.608033 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 09:42:35.608049 systemd[1]: Detected virtualization kvm. Feb 9 09:42:35.608060 systemd[1]: Detected architecture arm64. Feb 9 09:42:35.608070 systemd[1]: Detected first boot. Feb 9 09:42:35.608080 systemd[1]: Initializing machine ID from VM UUID. Feb 9 09:42:35.608091 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 09:42:35.608104 systemd[1]: Populated /etc with preset unit settings. Feb 9 09:42:35.608129 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 9 09:42:35.608141 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:42:35.608152 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:42:35.608175 systemd[1]: Queued start job for default target multi-user.target. Feb 9 09:42:35.608190 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 9 09:42:35.608201 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 09:42:35.608211 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 09:42:35.608223 systemd[1]: Created slice system-getty.slice. Feb 9 09:42:35.608238 systemd[1]: Created slice system-modprobe.slice. Feb 9 09:42:35.608248 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 09:42:35.608259 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 09:42:35.608269 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 09:42:35.608280 systemd[1]: Created slice user.slice. Feb 9 09:42:35.608290 systemd[1]: Started systemd-ask-password-console.path. Feb 9 09:42:35.608309 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 09:42:35.608320 systemd[1]: Set up automount boot.automount. Feb 9 09:42:35.608332 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 09:42:35.608342 systemd[1]: Reached target integritysetup.target. Feb 9 09:42:35.608352 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 09:42:35.608364 systemd[1]: Reached target remote-fs.target. Feb 9 09:42:35.608374 systemd[1]: Reached target slices.target. Feb 9 09:42:35.608384 systemd[1]: Reached target swap.target. Feb 9 09:42:35.608395 systemd[1]: Reached target torcx.target. Feb 9 09:42:35.608404 systemd[1]: Reached target veritysetup.target. 
Feb 9 09:42:35.608416 systemd[1]: Listening on systemd-coredump.socket. Feb 9 09:42:35.608430 systemd[1]: Listening on systemd-initctl.socket. Feb 9 09:42:35.608440 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 09:42:35.608451 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 09:42:35.608462 systemd[1]: Listening on systemd-journald.socket. Feb 9 09:42:35.608472 systemd[1]: Listening on systemd-networkd.socket. Feb 9 09:42:35.608483 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 09:42:35.608493 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 09:42:35.608503 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 09:42:35.608513 systemd[1]: Mounting dev-hugepages.mount... Feb 9 09:42:35.608525 systemd[1]: Mounting dev-mqueue.mount... Feb 9 09:42:35.608535 systemd[1]: Mounting media.mount... Feb 9 09:42:35.608545 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 09:42:35.608555 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 09:42:35.608567 systemd[1]: Mounting tmp.mount... Feb 9 09:42:35.608577 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 09:42:35.608588 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 09:42:35.608599 systemd[1]: Starting kmod-static-nodes.service... Feb 9 09:42:35.608608 systemd[1]: Starting modprobe@configfs.service... Feb 9 09:42:35.608620 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 09:42:35.608630 systemd[1]: Starting modprobe@drm.service... Feb 9 09:42:35.608640 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 09:42:35.608650 systemd[1]: Starting modprobe@fuse.service... Feb 9 09:42:35.608660 systemd[1]: Starting modprobe@loop.service... Feb 9 09:42:35.608671 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Feb 9 09:42:35.608682 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 9 09:42:35.608693 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 9 09:42:35.608705 systemd[1]: Starting systemd-journald.service... Feb 9 09:42:35.608716 systemd[1]: Starting systemd-modules-load.service... Feb 9 09:42:35.608727 systemd[1]: Starting systemd-network-generator.service... Feb 9 09:42:35.608737 systemd[1]: Starting systemd-remount-fs.service... Feb 9 09:42:35.608747 kernel: loop: module loaded Feb 9 09:42:35.608758 kernel: fuse: init (API version 7.34) Feb 9 09:42:35.608768 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 09:42:35.608779 systemd[1]: Mounted dev-hugepages.mount. Feb 9 09:42:35.608790 systemd[1]: Mounted dev-mqueue.mount. Feb 9 09:42:35.608800 systemd[1]: Mounted media.mount. Feb 9 09:42:35.608811 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 09:42:35.608821 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 09:42:35.608833 systemd[1]: Mounted tmp.mount. Feb 9 09:42:35.608843 systemd[1]: Finished kmod-static-nodes.service. Feb 9 09:42:35.608853 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 09:42:35.608863 systemd[1]: Finished modprobe@configfs.service. Feb 9 09:42:35.608876 systemd-journald[1026]: Journal started Feb 9 09:42:35.608915 systemd-journald[1026]: Runtime Journal (/run/log/journal/9138ce7794474e09bfd7a4c4dce399ee) is 6.0M, max 48.7M, 42.6M free. 
Feb 9 09:42:35.519000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 09:42:35.519000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 09:42:35.606000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 09:42:35.606000 audit[1026]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffede98be0 a2=4000 a3=1 items=0 ppid=1 pid=1026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:42:35.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:35.606000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 09:42:35.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:35.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:35.611603 systemd[1]: Started systemd-journald.service. Feb 9 09:42:35.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 9 09:42:35.611861 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 09:42:35.612025 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 09:42:35.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:35.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:35.613085 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 09:42:35.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:35.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:35.613625 systemd[1]: Finished modprobe@drm.service. Feb 9 09:42:35.614623 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 09:42:35.614803 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 09:42:35.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:35.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:42:35.615950 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 09:42:35.618185 systemd[1]: Finished modprobe@fuse.service. Feb 9 09:42:35.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:35.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:35.619401 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 09:42:35.619624 systemd[1]: Finished modprobe@loop.service. Feb 9 09:42:35.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:35.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:35.620973 systemd[1]: Finished systemd-modules-load.service. Feb 9 09:42:35.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:35.622479 systemd[1]: Finished systemd-network-generator.service. Feb 9 09:42:35.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:35.623788 systemd[1]: Finished systemd-remount-fs.service. 
Feb 9 09:42:35.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:35.625107 systemd[1]: Reached target network-pre.target. Feb 9 09:42:35.627539 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 09:42:35.629292 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 09:42:35.630221 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 09:42:35.631836 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 09:42:35.633563 systemd[1]: Starting systemd-journal-flush.service... Feb 9 09:42:35.634186 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 09:42:35.635334 systemd[1]: Starting systemd-random-seed.service... Feb 9 09:42:35.636203 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 09:42:35.640278 systemd-journald[1026]: Time spent on flushing to /var/log/journal/9138ce7794474e09bfd7a4c4dce399ee is 15.751ms for 961 entries. Feb 9 09:42:35.640278 systemd-journald[1026]: System Journal (/var/log/journal/9138ce7794474e09bfd7a4c4dce399ee) is 8.0M, max 195.6M, 187.6M free. Feb 9 09:42:35.673392 systemd-journald[1026]: Received client request to flush runtime journal. Feb 9 09:42:35.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:35.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:42:35.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:35.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:35.637247 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:42:35.643867 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 09:42:35.645175 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 09:42:35.652469 systemd[1]: Finished systemd-sysctl.service. Feb 9 09:42:35.654372 systemd[1]: Finished systemd-random-seed.service. Feb 9 09:42:35.655376 systemd[1]: Reached target first-boot-complete.target. Feb 9 09:42:35.664125 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 09:42:35.666335 systemd[1]: Starting systemd-sysusers.service... Feb 9 09:42:35.669449 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 09:42:35.671661 systemd[1]: Starting systemd-udev-settle.service... Feb 9 09:42:35.675523 systemd[1]: Finished systemd-journal-flush.service. Feb 9 09:42:35.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:35.678928 udevadm[1086]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 09:42:35.683702 systemd[1]: Finished systemd-sysusers.service. Feb 9 09:42:35.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:42:35.685718 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 09:42:35.701575 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 09:42:35.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:36.011392 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 09:42:36.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:36.013475 systemd[1]: Starting systemd-udevd.service... Feb 9 09:42:36.033616 systemd-udevd[1093]: Using default interface naming scheme 'v252'. Feb 9 09:42:36.046976 systemd[1]: Started systemd-udevd.service. Feb 9 09:42:36.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:36.049294 systemd[1]: Starting systemd-networkd.service... Feb 9 09:42:36.056500 systemd[1]: Starting systemd-userdbd.service... Feb 9 09:42:36.070362 systemd[1]: Found device dev-ttyAMA0.device. Feb 9 09:42:36.102064 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 09:42:36.121442 systemd[1]: Started systemd-userdbd.service. Feb 9 09:42:36.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:36.139758 systemd[1]: Finished systemd-udev-settle.service. 
Feb 9 09:42:36.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:36.141970 systemd[1]: Starting lvm2-activation-early.service... Feb 9 09:42:36.155853 lvm[1126]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:42:36.173500 systemd-networkd[1095]: lo: Link UP Feb 9 09:42:36.173779 systemd-networkd[1095]: lo: Gained carrier Feb 9 09:42:36.174211 systemd-networkd[1095]: Enumeration completed Feb 9 09:42:36.174408 systemd-networkd[1095]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 09:42:36.174426 systemd[1]: Started systemd-networkd.service. Feb 9 09:42:36.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:36.177330 systemd-networkd[1095]: eth0: Link UP Feb 9 09:42:36.177341 systemd-networkd[1095]: eth0: Gained carrier Feb 9 09:42:36.191316 systemd[1]: Finished lvm2-activation-early.service. Feb 9 09:42:36.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:36.192107 systemd[1]: Reached target cryptsetup.target. Feb 9 09:42:36.194151 systemd[1]: Starting lvm2-activation.service... Feb 9 09:42:36.197855 lvm[1129]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:42:36.199490 systemd-networkd[1095]: eth0: DHCPv4 address 10.0.0.11/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 09:42:36.229246 systemd[1]: Finished lvm2-activation.service. 
Feb 9 09:42:36.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:36.230041 systemd[1]: Reached target local-fs-pre.target. Feb 9 09:42:36.230755 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 09:42:36.230783 systemd[1]: Reached target local-fs.target. Feb 9 09:42:36.231399 systemd[1]: Reached target machines.target. Feb 9 09:42:36.233281 systemd[1]: Starting ldconfig.service... Feb 9 09:42:36.234167 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 09:42:36.234218 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:42:36.235492 systemd[1]: Starting systemd-boot-update.service... Feb 9 09:42:36.237129 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 09:42:36.239521 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 09:42:36.240484 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:42:36.240555 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:42:36.241839 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 09:42:36.243158 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1132 (bootctl) Feb 9 09:42:36.244601 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 09:42:36.257340 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Feb 9 09:42:36.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:36.272216 systemd-tmpfiles[1135]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 09:42:36.273904 systemd-tmpfiles[1135]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 09:42:36.277537 systemd-tmpfiles[1135]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 09:42:36.386034 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 09:42:36.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:36.391558 systemd-fsck[1141]: fsck.fat 4.2 (2021-01-31) Feb 9 09:42:36.391558 systemd-fsck[1141]: /dev/vda1: 236 files, 113719/258078 clusters Feb 9 09:42:36.393517 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 09:42:36.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:36.464199 ldconfig[1131]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 09:42:36.468401 systemd[1]: Finished ldconfig.service. Feb 9 09:42:36.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:42:36.596410 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 09:42:36.597888 systemd[1]: Mounting boot.mount... Feb 9 09:42:36.604699 systemd[1]: Mounted boot.mount. Feb 9 09:42:36.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:36.613237 systemd[1]: Finished systemd-boot-update.service. Feb 9 09:42:36.661948 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 09:42:36.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:36.663970 systemd[1]: Starting audit-rules.service... Feb 9 09:42:36.665678 systemd[1]: Starting clean-ca-certificates.service... Feb 9 09:42:36.667407 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 09:42:36.669684 systemd[1]: Starting systemd-resolved.service... Feb 9 09:42:36.671823 systemd[1]: Starting systemd-timesyncd.service... Feb 9 09:42:36.674683 systemd[1]: Starting systemd-update-utmp.service... Feb 9 09:42:36.676105 systemd[1]: Finished clean-ca-certificates.service. Feb 9 09:42:36.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:36.677777 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 09:42:36.686000 audit[1162]: SYSTEM_BOOT pid=1162 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? 
addr=? terminal=? res=success' Feb 9 09:42:36.687915 systemd[1]: Finished systemd-update-utmp.service. Feb 9 09:42:36.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:36.696076 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 09:42:36.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:36.698614 systemd[1]: Starting systemd-update-done.service... Feb 9 09:42:36.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:36.706583 systemd[1]: Finished systemd-update-done.service. Feb 9 09:42:36.718000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 09:42:36.718000 audit[1175]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff20d8480 a2=420 a3=0 items=0 ppid=1150 pid=1175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:42:36.718000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 09:42:36.718844 augenrules[1175]: No rules Feb 9 09:42:36.719920 systemd[1]: Finished audit-rules.service. Feb 9 09:42:36.736330 systemd-resolved[1155]: Positive Trust Anchors: Feb 9 09:42:36.736343 systemd-resolved[1155]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 09:42:36.736374 systemd-resolved[1155]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 09:42:36.741790 systemd[1]: Started systemd-timesyncd.service. Feb 9 09:42:36.742545 systemd-timesyncd[1156]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 9 09:42:36.742602 systemd-timesyncd[1156]: Initial clock synchronization to Fri 2024-02-09 09:42:36.638472 UTC. Feb 9 09:42:36.742884 systemd[1]: Reached target time-set.target. Feb 9 09:42:36.746141 systemd-resolved[1155]: Defaulting to hostname 'linux'. Feb 9 09:42:36.747606 systemd[1]: Started systemd-resolved.service. Feb 9 09:42:36.748267 systemd[1]: Reached target network.target. Feb 9 09:42:36.748867 systemd[1]: Reached target nss-lookup.target. Feb 9 09:42:36.749500 systemd[1]: Reached target sysinit.target. Feb 9 09:42:36.750123 systemd[1]: Started motdgen.path. Feb 9 09:42:36.750690 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 09:42:36.751662 systemd[1]: Started logrotate.timer. Feb 9 09:42:36.752531 systemd[1]: Started mdadm.timer. Feb 9 09:42:36.753214 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 09:42:36.754051 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 09:42:36.754084 systemd[1]: Reached target paths.target. Feb 9 09:42:36.754811 systemd[1]: Reached target timers.target. Feb 9 09:42:36.755889 systemd[1]: Listening on dbus.socket. 
Feb 9 09:42:36.757859 systemd[1]: Starting docker.socket... Feb 9 09:42:36.759570 systemd[1]: Listening on sshd.socket. Feb 9 09:42:36.760457 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:42:36.760806 systemd[1]: Listening on docker.socket. Feb 9 09:42:36.761595 systemd[1]: Reached target sockets.target. Feb 9 09:42:36.762368 systemd[1]: Reached target basic.target. Feb 9 09:42:36.763268 systemd[1]: System is tainted: cgroupsv1 Feb 9 09:42:36.763333 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:42:36.763355 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:42:36.764649 systemd[1]: Starting containerd.service... Feb 9 09:42:36.766898 systemd[1]: Starting dbus.service... Feb 9 09:42:36.768934 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 09:42:36.771077 systemd[1]: Starting extend-filesystems.service... Feb 9 09:42:36.772020 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 09:42:36.773538 systemd[1]: Starting motdgen.service... Feb 9 09:42:36.775631 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 09:42:36.777633 systemd[1]: Starting prepare-critools.service... Feb 9 09:42:36.779733 systemd[1]: Starting prepare-helm.service... Feb 9 09:42:36.781641 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 09:42:36.783719 systemd[1]: Starting sshd-keygen.service... Feb 9 09:42:36.785697 jq[1188]: false Feb 9 09:42:36.786431 systemd[1]: Starting systemd-logind.service... 
Feb 9 09:42:36.787610 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:42:36.787678 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 09:42:36.788911 systemd[1]: Starting update-engine.service... Feb 9 09:42:36.790826 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 09:42:36.793779 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 09:42:36.794200 jq[1210]: true Feb 9 09:42:36.797480 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 09:42:36.800696 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 09:42:36.800942 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 09:42:36.817175 jq[1221]: true Feb 9 09:42:36.817487 tar[1220]: linux-arm64/helm Feb 9 09:42:36.819458 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 09:42:36.819639 tar[1216]: ./ Feb 9 09:42:36.819708 systemd[1]: Finished motdgen.service. 
Feb 9 09:42:36.819909 tar[1216]: ./macvlan Feb 9 09:42:36.820827 tar[1217]: crictl Feb 9 09:42:36.830769 extend-filesystems[1189]: Found vda Feb 9 09:42:36.830769 extend-filesystems[1189]: Found vda1 Feb 9 09:42:36.830769 extend-filesystems[1189]: Found vda2 Feb 9 09:42:36.830769 extend-filesystems[1189]: Found vda3 Feb 9 09:42:36.830769 extend-filesystems[1189]: Found usr Feb 9 09:42:36.830769 extend-filesystems[1189]: Found vda4 Feb 9 09:42:36.830769 extend-filesystems[1189]: Found vda6 Feb 9 09:42:36.830769 extend-filesystems[1189]: Found vda7 Feb 9 09:42:36.830769 extend-filesystems[1189]: Found vda9 Feb 9 09:42:36.847000 extend-filesystems[1189]: Checking size of /dev/vda9 Feb 9 09:42:36.852992 extend-filesystems[1189]: Resized partition /dev/vda9 Feb 9 09:42:36.856005 extend-filesystems[1251]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 09:42:36.868553 bash[1248]: Updated "/home/core/.ssh/authorized_keys" Feb 9 09:42:36.864277 dbus-daemon[1187]: [system] SELinux support is enabled Feb 9 09:42:36.861635 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 09:42:36.864518 systemd[1]: Started dbus.service. Feb 9 09:42:36.866828 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 09:42:36.866846 systemd[1]: Reached target system-config.target. Feb 9 09:42:36.868068 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 09:42:36.868083 systemd[1]: Reached target user-config.target. Feb 9 09:42:36.884641 systemd-logind[1203]: Watching system buttons on /dev/input/event0 (Power Button) Feb 9 09:42:36.884860 systemd-logind[1203]: New seat seat0. Feb 9 09:42:36.890570 systemd[1]: Started systemd-logind.service. 
Feb 9 09:42:36.893241 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 9 09:42:36.918310 tar[1216]: ./static Feb 9 09:42:36.919332 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 9 09:42:36.943223 extend-filesystems[1251]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 9 09:42:36.943223 extend-filesystems[1251]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 09:42:36.943223 extend-filesystems[1251]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 9 09:42:36.951360 extend-filesystems[1189]: Resized filesystem in /dev/vda9 Feb 9 09:42:36.943332 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 09:42:36.943582 systemd[1]: Finished extend-filesystems.service. Feb 9 09:42:36.961123 env[1222]: time="2024-02-09T09:42:36.961064920Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 09:42:36.971644 tar[1216]: ./vlan Feb 9 09:42:37.006842 update_engine[1206]: I0209 09:42:37.006528 1206 main.cc:92] Flatcar Update Engine starting Feb 9 09:42:37.012175 tar[1216]: ./portmap Feb 9 09:42:37.014170 systemd[1]: Started update-engine.service. Feb 9 09:42:37.016878 systemd[1]: Started locksmithd.service. Feb 9 09:42:37.018584 update_engine[1206]: I0209 09:42:37.018554 1206 update_check_scheduler.cc:74] Next update check in 8m13s Feb 9 09:42:37.043994 tar[1216]: ./host-local Feb 9 09:42:37.046076 env[1222]: time="2024-02-09T09:42:37.046041200Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 09:42:37.046654 env[1222]: time="2024-02-09T09:42:37.046630564Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:42:37.048238 env[1222]: time="2024-02-09T09:42:37.048202279Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:42:37.048238 env[1222]: time="2024-02-09T09:42:37.048236193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:42:37.048517 env[1222]: time="2024-02-09T09:42:37.048490012Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:42:37.048517 env[1222]: time="2024-02-09T09:42:37.048514925Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 09:42:37.048582 env[1222]: time="2024-02-09T09:42:37.048536441Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 09:42:37.048582 env[1222]: time="2024-02-09T09:42:37.048548957Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 09:42:37.048649 env[1222]: time="2024-02-09T09:42:37.048629931Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:42:37.048927 env[1222]: time="2024-02-09T09:42:37.048902070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:42:37.049079 env[1222]: time="2024-02-09T09:42:37.049058728Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:42:37.049079 env[1222]: time="2024-02-09T09:42:37.049077955Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 09:42:37.049147 env[1222]: time="2024-02-09T09:42:37.049132240Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 09:42:37.049186 env[1222]: time="2024-02-09T09:42:37.049147993Z" level=info msg="metadata content store policy set" policy=shared Feb 9 09:42:37.061363 env[1222]: time="2024-02-09T09:42:37.061317162Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 09:42:37.061461 env[1222]: time="2024-02-09T09:42:37.061368921Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 09:42:37.061461 env[1222]: time="2024-02-09T09:42:37.061383647Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 09:42:37.061461 env[1222]: time="2024-02-09T09:42:37.061418034Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 09:42:37.061461 env[1222]: time="2024-02-09T09:42:37.061432918Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 09:42:37.061461 env[1222]: time="2024-02-09T09:42:37.061446618Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 09:42:37.061461 env[1222]: time="2024-02-09T09:42:37.061460950Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Feb 9 09:42:37.061873 env[1222]: time="2024-02-09T09:42:37.061849911Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 09:42:37.061916 env[1222]: time="2024-02-09T09:42:37.061885996Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 09:42:37.061916 env[1222]: time="2024-02-09T09:42:37.061900683Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 09:42:37.061962 env[1222]: time="2024-02-09T09:42:37.061915409Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 09:42:37.061962 env[1222]: time="2024-02-09T09:42:37.061928516Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 09:42:37.062087 env[1222]: time="2024-02-09T09:42:37.062064803Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 09:42:37.062161 env[1222]: time="2024-02-09T09:42:37.062144908Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 09:42:37.062531 env[1222]: time="2024-02-09T09:42:37.062508721Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 09:42:37.062580 env[1222]: time="2024-02-09T09:42:37.062542081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 09:42:37.062580 env[1222]: time="2024-02-09T09:42:37.062555781Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 09:42:37.062675 env[1222]: time="2024-02-09T09:42:37.062661826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Feb 9 09:42:37.062710 env[1222]: time="2024-02-09T09:42:37.062677578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 09:42:37.062710 env[1222]: time="2024-02-09T09:42:37.062690449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 09:42:37.062710 env[1222]: time="2024-02-09T09:42:37.062703004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 09:42:37.062777 env[1222]: time="2024-02-09T09:42:37.062714453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 09:42:37.062777 env[1222]: time="2024-02-09T09:42:37.062727797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 09:42:37.062777 env[1222]: time="2024-02-09T09:42:37.062738339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 09:42:37.062777 env[1222]: time="2024-02-09T09:42:37.062748959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 09:42:37.062777 env[1222]: time="2024-02-09T09:42:37.062761672Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 09:42:37.062915 env[1222]: time="2024-02-09T09:42:37.062886114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 09:42:37.062915 env[1222]: time="2024-02-09T09:42:37.062902775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 09:42:37.062915 env[1222]: time="2024-02-09T09:42:37.062913908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Feb 9 09:42:37.062989 env[1222]: time="2024-02-09T09:42:37.062930213Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 09:42:37.062989 env[1222]: time="2024-02-09T09:42:37.062952796Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 09:42:37.062989 env[1222]: time="2024-02-09T09:42:37.062964246Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 09:42:37.062989 env[1222]: time="2024-02-09T09:42:37.062980117Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 09:42:37.063063 env[1222]: time="2024-02-09T09:42:37.063012688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 09:42:37.063519 env[1222]: time="2024-02-09T09:42:37.063389292Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 09:42:37.067831 env[1222]: time="2024-02-09T09:42:37.063524552Z" level=info msg="Connect containerd service" Feb 9 09:42:37.067831 env[1222]: time="2024-02-09T09:42:37.063556491Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 09:42:37.067831 env[1222]: time="2024-02-09T09:42:37.064228290Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 09:42:37.067831 env[1222]: time="2024-02-09T09:42:37.064595498Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 09:42:37.067831 env[1222]: time="2024-02-09T09:42:37.064648204Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 9 09:42:37.067831 env[1222]: time="2024-02-09T09:42:37.064691277Z" level=info msg="containerd successfully booted in 0.104537s" Feb 9 09:42:37.067831 env[1222]: time="2024-02-09T09:42:37.067362957Z" level=info msg="Start subscribing containerd event" Feb 9 09:42:37.067831 env[1222]: time="2024-02-09T09:42:37.067418506Z" level=info msg="Start recovering state" Feb 9 09:42:37.067831 env[1222]: time="2024-02-09T09:42:37.067495295Z" level=info msg="Start event monitor" Feb 9 09:42:37.067831 env[1222]: time="2024-02-09T09:42:37.067514641Z" level=info msg="Start snapshots syncer" Feb 9 09:42:37.067831 env[1222]: time="2024-02-09T09:42:37.067524827Z" level=info msg="Start cni network conf syncer for default" Feb 9 09:42:37.067831 env[1222]: time="2024-02-09T09:42:37.067532486Z" level=info msg="Start streaming server" Feb 9 09:42:37.066110 systemd[1]: Started containerd.service. Feb 9 09:42:37.071139 tar[1216]: ./vrf Feb 9 09:42:37.100902 tar[1216]: ./bridge Feb 9 09:42:37.135033 tar[1216]: ./tuning Feb 9 09:42:37.163167 tar[1216]: ./firewall Feb 9 09:42:37.198155 tar[1216]: ./host-device Feb 9 09:42:37.229385 tar[1216]: ./sbr Feb 9 09:42:37.257430 tar[1216]: ./loopback Feb 9 09:42:37.284859 tar[1216]: ./dhcp Feb 9 09:42:37.336795 systemd[1]: Finished prepare-critools.service. Feb 9 09:42:37.364531 tar[1216]: ./ptp Feb 9 09:42:37.375986 tar[1220]: linux-arm64/LICENSE Feb 9 09:42:37.376106 tar[1220]: linux-arm64/README.md Feb 9 09:42:37.383149 systemd[1]: Finished prepare-helm.service. Feb 9 09:42:37.397339 tar[1216]: ./ipvlan Feb 9 09:42:37.408364 locksmithd[1260]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 09:42:37.430140 tar[1216]: ./bandwidth Feb 9 09:42:37.471775 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 09:42:37.912033 sshd_keygen[1218]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 09:42:37.929015 systemd[1]: Finished sshd-keygen.service. 
Feb 9 09:42:37.931342 systemd[1]: Starting issuegen.service... Feb 9 09:42:37.935821 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 09:42:37.936021 systemd[1]: Finished issuegen.service. Feb 9 09:42:37.938069 systemd[1]: Starting systemd-user-sessions.service... Feb 9 09:42:37.944929 systemd[1]: Finished systemd-user-sessions.service. Feb 9 09:42:37.947200 systemd[1]: Started getty@tty1.service. Feb 9 09:42:37.949278 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 9 09:42:37.950363 systemd[1]: Reached target getty.target. Feb 9 09:42:37.951176 systemd[1]: Reached target multi-user.target. Feb 9 09:42:37.953236 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 09:42:37.959487 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 09:42:37.959703 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 09:42:37.960762 systemd[1]: Startup finished in 14.699s (kernel) + 4.260s (userspace) = 18.960s. Feb 9 09:42:37.976383 systemd-networkd[1095]: eth0: Gained IPv6LL Feb 9 09:42:41.870074 systemd[1]: Created slice system-sshd.slice. Feb 9 09:42:41.871613 systemd[1]: Started sshd@0-10.0.0.11:22-10.0.0.1:46554.service. Feb 9 09:42:41.914930 sshd[1297]: Accepted publickey for core from 10.0.0.1 port 46554 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:42:41.917183 sshd[1297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:42:41.929703 systemd-logind[1203]: New session 1 of user core. Feb 9 09:42:41.932584 systemd[1]: Created slice user-500.slice. Feb 9 09:42:41.934013 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 09:42:41.943190 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 09:42:41.945286 systemd[1]: Starting user@500.service... 
Feb 9 09:42:41.948207 (systemd)[1302]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:42:42.005436 systemd[1302]: Queued start job for default target default.target. Feb 9 09:42:42.005661 systemd[1302]: Reached target paths.target. Feb 9 09:42:42.005676 systemd[1302]: Reached target sockets.target. Feb 9 09:42:42.005687 systemd[1302]: Reached target timers.target. Feb 9 09:42:42.005709 systemd[1302]: Reached target basic.target. Feb 9 09:42:42.005754 systemd[1302]: Reached target default.target. Feb 9 09:42:42.005780 systemd[1302]: Startup finished in 52ms. Feb 9 09:42:42.006061 systemd[1]: Started user@500.service. Feb 9 09:42:42.007004 systemd[1]: Started session-1.scope. Feb 9 09:42:42.056919 systemd[1]: Started sshd@1-10.0.0.11:22-10.0.0.1:46568.service. Feb 9 09:42:42.103113 sshd[1311]: Accepted publickey for core from 10.0.0.1 port 46568 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:42:42.104244 sshd[1311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:42:42.107565 systemd-logind[1203]: New session 2 of user core. Feb 9 09:42:42.108337 systemd[1]: Started session-2.scope. Feb 9 09:42:42.163058 sshd[1311]: pam_unix(sshd:session): session closed for user core Feb 9 09:42:42.165093 systemd[1]: Started sshd@2-10.0.0.11:22-10.0.0.1:46580.service. Feb 9 09:42:42.165665 systemd[1]: sshd@1-10.0.0.11:22-10.0.0.1:46568.service: Deactivated successfully. Feb 9 09:42:42.166547 systemd-logind[1203]: Session 2 logged out. Waiting for processes to exit. Feb 9 09:42:42.166577 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 09:42:42.167385 systemd-logind[1203]: Removed session 2. 
Feb 9 09:42:42.201426 sshd[1316]: Accepted publickey for core from 10.0.0.1 port 46580 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:42:42.202582 sshd[1316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:42:42.205563 systemd-logind[1203]: New session 3 of user core. Feb 9 09:42:42.206341 systemd[1]: Started session-3.scope. Feb 9 09:42:42.254590 sshd[1316]: pam_unix(sshd:session): session closed for user core Feb 9 09:42:42.256677 systemd[1]: Started sshd@3-10.0.0.11:22-10.0.0.1:46584.service. Feb 9 09:42:42.257078 systemd[1]: sshd@2-10.0.0.11:22-10.0.0.1:46580.service: Deactivated successfully. Feb 9 09:42:42.258114 systemd-logind[1203]: Session 3 logged out. Waiting for processes to exit. Feb 9 09:42:42.258151 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 09:42:42.259347 systemd-logind[1203]: Removed session 3. Feb 9 09:42:42.291684 sshd[1323]: Accepted publickey for core from 10.0.0.1 port 46584 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:42:42.292732 sshd[1323]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:42:42.295752 systemd-logind[1203]: New session 4 of user core. Feb 9 09:42:42.296503 systemd[1]: Started session-4.scope. Feb 9 09:42:42.349804 sshd[1323]: pam_unix(sshd:session): session closed for user core Feb 9 09:42:42.351736 systemd[1]: Started sshd@4-10.0.0.11:22-10.0.0.1:46588.service. Feb 9 09:42:42.352673 systemd[1]: sshd@3-10.0.0.11:22-10.0.0.1:46584.service: Deactivated successfully. Feb 9 09:42:42.353671 systemd-logind[1203]: Session 4 logged out. Waiting for processes to exit. Feb 9 09:42:42.353884 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 09:42:42.354768 systemd-logind[1203]: Removed session 4. 
Feb 9 09:42:42.386654 sshd[1330]: Accepted publickey for core from 10.0.0.1 port 46588 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:42:42.387979 sshd[1330]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:42:42.390955 systemd-logind[1203]: New session 5 of user core. Feb 9 09:42:42.392837 systemd[1]: Started session-5.scope. Feb 9 09:42:42.452026 sudo[1336]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 9 09:42:42.452242 sudo[1336]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 09:42:42.463495 dbus-daemon[1187]: avc: received setenforce notice (enforcing=1) Feb 9 09:42:42.465423 sudo[1336]: pam_unix(sudo:session): session closed for user root Feb 9 09:42:42.467176 sshd[1330]: pam_unix(sshd:session): session closed for user core Feb 9 09:42:42.470212 systemd[1]: sshd@4-10.0.0.11:22-10.0.0.1:46588.service: Deactivated successfully. Feb 9 09:42:42.471251 systemd-logind[1203]: Session 5 logged out. Waiting for processes to exit. Feb 9 09:42:42.472811 systemd[1]: Started sshd@5-10.0.0.11:22-10.0.0.1:46596.service. Feb 9 09:42:42.473702 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 09:42:42.474541 systemd-logind[1203]: Removed session 5. Feb 9 09:42:42.507811 sshd[1340]: Accepted publickey for core from 10.0.0.1 port 46596 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:42:42.508933 sshd[1340]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:42:42.512244 systemd-logind[1203]: New session 6 of user core. Feb 9 09:42:42.514135 systemd[1]: Started session-6.scope. 
Feb 9 09:42:42.565364 sudo[1345]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 9 09:42:42.565567 sudo[1345]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 09:42:42.568010 sudo[1345]: pam_unix(sudo:session): session closed for user root Feb 9 09:42:42.571956 sudo[1344]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 9 09:42:42.572161 sudo[1344]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 09:42:42.579827 systemd[1]: Stopping audit-rules.service... Feb 9 09:42:42.579000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 9 09:42:42.580970 auditctl[1348]: No rules Feb 9 09:42:42.581253 systemd[1]: audit-rules.service: Deactivated successfully. Feb 9 09:42:42.581362 kernel: kauditd_printk_skb: 97 callbacks suppressed Feb 9 09:42:42.581397 kernel: audit: type=1305 audit(1707471762.579:130): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 9 09:42:42.581476 systemd[1]: Stopped audit-rules.service. Feb 9 09:42:42.579000 audit[1348]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffdf117f0 a2=420 a3=0 items=0 ppid=1 pid=1348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:42:42.582856 systemd[1]: Starting audit-rules.service... 
Feb 9 09:42:42.585164 kernel: audit: type=1300 audit(1707471762.579:130): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffdf117f0 a2=420 a3=0 items=0 ppid=1 pid=1348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:42:42.585199 kernel: audit: type=1327 audit(1707471762.579:130): proctitle=2F7362696E2F617564697463746C002D44 Feb 9 09:42:42.579000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Feb 9 09:42:42.586205 kernel: audit: type=1131 audit(1707471762.580:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:42.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:42.597107 augenrules[1366]: No rules Feb 9 09:42:42.597989 systemd[1]: Finished audit-rules.service. Feb 9 09:42:42.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:42.600189 sudo[1344]: pam_unix(sudo:session): session closed for user root Feb 9 09:42:42.600309 kernel: audit: type=1130 audit(1707471762.596:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:42.599000 audit[1344]: USER_END pid=1344 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? 
addr=? terminal=? res=success' Feb 9 09:42:42.599000 audit[1344]: CRED_DISP pid=1344 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 09:42:42.603572 sshd[1340]: pam_unix(sshd:session): session closed for user core Feb 9 09:42:42.605382 kernel: audit: type=1106 audit(1707471762.599:133): pid=1344 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 09:42:42.605409 kernel: audit: type=1104 audit(1707471762.599:134): pid=1344 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 09:42:42.605423 kernel: audit: type=1106 audit(1707471762.602:135): pid=1340 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:42:42.602000 audit[1340]: USER_END pid=1340 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:42:42.606400 systemd[1]: Started sshd@6-10.0.0.11:22-10.0.0.1:46612.service. Feb 9 09:42:42.607025 systemd-logind[1203]: Session 6 logged out. Waiting for processes to exit. Feb 9 09:42:42.607103 systemd[1]: sshd@5-10.0.0.11:22-10.0.0.1:46596.service: Deactivated successfully. Feb 9 09:42:42.607783 systemd[1]: session-6.scope: Deactivated successfully. 
Feb 9 09:42:42.602000 audit[1340]: CRED_DISP pid=1340 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:42:42.608601 systemd-logind[1203]: Removed session 6. Feb 9 09:42:42.610060 kernel: audit: type=1104 audit(1707471762.602:136): pid=1340 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:42:42.610101 kernel: audit: type=1130 audit(1707471762.604:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.11:22-10.0.0.1:46612 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:42.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.11:22-10.0.0.1:46612 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:42.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.11:22-10.0.0.1:46596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:42:42.639000 audit[1371]: USER_ACCT pid=1371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:42:42.640909 sshd[1371]: Accepted publickey for core from 10.0.0.1 port 46612 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:42:42.640000 audit[1371]: CRED_ACQ pid=1371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:42:42.640000 audit[1371]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdbbf6990 a2=3 a3=1 items=0 ppid=1 pid=1371 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:42:42.640000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 09:42:42.642164 sshd[1371]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:42:42.645276 systemd-logind[1203]: New session 7 of user core. Feb 9 09:42:42.645603 systemd[1]: Started session-7.scope. 
Feb 9 09:42:42.648000 audit[1371]: USER_START pid=1371 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:42:42.649000 audit[1376]: CRED_ACQ pid=1376 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:42:42.696000 audit[1377]: USER_ACCT pid=1377 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 09:42:42.697099 sudo[1377]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 09:42:42.696000 audit[1377]: CRED_REFR pid=1377 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 09:42:42.697337 sudo[1377]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 09:42:42.698000 audit[1377]: USER_START pid=1377 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 09:42:43.283417 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 09:42:43.290949 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 09:42:43.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:42:43.291234 systemd[1]: Reached target network-online.target. Feb 9 09:42:43.292669 systemd[1]: Starting docker.service... Feb 9 09:42:43.408048 env[1396]: time="2024-02-09T09:42:43.407985176Z" level=info msg="Starting up" Feb 9 09:42:43.409699 env[1396]: time="2024-02-09T09:42:43.409675504Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 09:42:43.409699 env[1396]: time="2024-02-09T09:42:43.409698529Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 09:42:43.409846 env[1396]: time="2024-02-09T09:42:43.409717220Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 09:42:43.409846 env[1396]: time="2024-02-09T09:42:43.409727480Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 09:42:43.411915 env[1396]: time="2024-02-09T09:42:43.411889204Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 09:42:43.411998 env[1396]: time="2024-02-09T09:42:43.411984604Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 09:42:43.412060 env[1396]: time="2024-02-09T09:42:43.412043857Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 09:42:43.412120 env[1396]: time="2024-02-09T09:42:43.412107762Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 09:42:43.638129 env[1396]: time="2024-02-09T09:42:43.638032506Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 9 09:42:43.638487 env[1396]: time="2024-02-09T09:42:43.638281566Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 9 09:42:43.638742 env[1396]: time="2024-02-09T09:42:43.638725483Z" level=info msg="Loading containers: start." 
Feb 9 09:42:43.685000 audit[1430]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1430 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:42:43.685000 audit[1430]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffc6255940 a2=0 a3=1 items=0 ppid=1396 pid=1430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:42:43.685000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Feb 9 09:42:43.687000 audit[1432]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1432 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:42:43.687000 audit[1432]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc9ab46b0 a2=0 a3=1 items=0 ppid=1396 pid=1432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:42:43.687000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Feb 9 09:42:43.688000 audit[1434]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1434 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:42:43.688000 audit[1434]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffce04c920 a2=0 a3=1 items=0 ppid=1396 pid=1434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:42:43.688000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Feb 9 09:42:43.690000 
audit[1436]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1436 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:42:43.690000 audit[1436]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc8a92940 a2=0 a3=1 items=0 ppid=1396 pid=1436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:42:43.690000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Feb 9 09:42:43.692000 audit[1438]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1438 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:42:43.692000 audit[1438]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffffb40a9f0 a2=0 a3=1 items=0 ppid=1396 pid=1438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:42:43.692000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Feb 9 09:42:43.719000 audit[1443]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1443 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:42:43.719000 audit[1443]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffdf331850 a2=0 a3=1 items=0 ppid=1396 pid=1443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:42:43.719000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Feb 9 09:42:43.731000 audit[1445]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1445 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:42:43.731000 audit[1445]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff1bf1ae0 a2=0 a3=1 items=0 ppid=1396 pid=1445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:42:43.731000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Feb 9 09:42:43.732000 audit[1447]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1447 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:42:43.732000 audit[1447]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffff53d340 a2=0 a3=1 items=0 ppid=1396 pid=1447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:42:43.732000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Feb 9 09:42:43.735000 audit[1449]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1449 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:42:43.735000 audit[1449]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffc30f0790 a2=0 a3=1 items=0 ppid=1396 pid=1449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:42:43.735000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 9 09:42:43.744000 audit[1453]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1453 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:42:43.744000 audit[1453]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffe7232180 a2=0 a3=1 items=0 ppid=1396 pid=1453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:42:43.744000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Feb 9 09:42:43.745000 audit[1454]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1454 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:42:43.745000 audit[1454]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffe53cf2a0 a2=0 a3=1 items=0 ppid=1396 pid=1454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:42:43.745000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 9 09:42:43.753853 kernel: Initializing XFRM netlink socket Feb 9 09:42:43.778711 env[1396]: time="2024-02-09T09:42:43.778659737Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Feb 9 09:42:43.797000 audit[1462]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1462 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:42:43.797000 audit[1462]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=fffff77827b0 a2=0 a3=1 items=0 ppid=1396 pid=1462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:42:43.797000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Feb 9 09:42:43.813000 audit[1465]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1465 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:42:43.813000 audit[1465]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffc2fcfcb0 a2=0 a3=1 items=0 ppid=1396 pid=1465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:42:43.813000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Feb 9 09:42:43.817000 audit[1468]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1468 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:42:43.817000 audit[1468]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffff5ecaef0 a2=0 a3=1 items=0 ppid=1396 pid=1468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 
09:42:43.817000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Feb 9 09:42:43.822000 audit[1470]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1470 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:42:43.822000 audit[1470]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffffc2b32d0 a2=0 a3=1 items=0 ppid=1396 pid=1470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:42:43.822000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Feb 9 09:42:43.824000 audit[1472]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1472 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:42:43.824000 audit[1472]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=ffffcf4d4620 a2=0 a3=1 items=0 ppid=1396 pid=1472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:42:43.824000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Feb 9 09:42:43.827000 audit[1474]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1474 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:42:43.827000 audit[1474]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=fffff7192860 a2=0 a3=1 items=0 ppid=1396 pid=1474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:42:43.827000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Feb 9 09:42:43.829000 audit[1476]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1476 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:42:43.829000 audit[1476]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=fffffb9b2ec0 a2=0 a3=1 items=0 ppid=1396 pid=1476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:42:43.829000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Feb 9 09:42:43.843000 audit[1479]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1479 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:42:43.843000 audit[1479]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffed0faee0 a2=0 a3=1 items=0 ppid=1396 pid=1479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:42:43.843000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Feb 9 09:42:43.844000 audit[1481]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1481 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:42:43.844000 audit[1481]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=fffff20aa550 a2=0 a3=1 items=0 ppid=1396 pid=1481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:42:43.844000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Feb 9 09:42:43.846000 audit[1483]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:42:43.846000 audit[1483]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=fffffb713170 a2=0 a3=1 items=0 ppid=1396 pid=1483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:42:43.846000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Feb 9 09:42:43.848000 audit[1485]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1485 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:42:43.848000 audit[1485]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffeae1fab0 a2=0 a3=1 items=0 ppid=1396 pid=1485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:42:43.848000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Feb 9 
09:42:43.849475 systemd-networkd[1095]: docker0: Link UP Feb 9 09:42:43.857000 audit[1489]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1489 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:42:43.857000 audit[1489]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe1e7a7d0 a2=0 a3=1 items=0 ppid=1396 pid=1489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:42:43.857000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Feb 9 09:42:43.861000 audit[1490]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1490 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:42:43.861000 audit[1490]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffff0633310 a2=0 a3=1 items=0 ppid=1396 pid=1490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:42:43.861000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 9 09:42:43.862755 env[1396]: time="2024-02-09T09:42:43.862704794Z" level=info msg="Loading containers: done." Feb 9 09:42:43.885704 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2044929103-merged.mount: Deactivated successfully. 
Feb 9 09:42:43.890421 env[1396]: time="2024-02-09T09:42:43.890323126Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 09:42:43.890512 env[1396]: time="2024-02-09T09:42:43.890496669Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 09:42:43.890867 env[1396]: time="2024-02-09T09:42:43.890787086Z" level=info msg="Daemon has completed initialization" Feb 9 09:42:43.909580 systemd[1]: Started docker.service. Feb 9 09:42:43.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:43.916420 env[1396]: time="2024-02-09T09:42:43.916368044Z" level=info msg="API listen on /run/docker.sock" Feb 9 09:42:43.939635 systemd[1]: Reloading. Feb 9 09:42:43.979754 /usr/lib/systemd/system-generators/torcx-generator[1540]: time="2024-02-09T09:42:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:42:43.980134 /usr/lib/systemd/system-generators/torcx-generator[1540]: time="2024-02-09T09:42:43Z" level=info msg="torcx already run" Feb 9 09:42:44.043349 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:42:44.043369 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 9 09:42:44.060501 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:42:44.118044 systemd[1]: Started kubelet.service. Feb 9 09:42:44.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:44.313674 kubelet[1582]: E0209 09:42:44.313612 1582 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 09:42:44.316255 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:42:44.316448 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:42:44.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 09:42:44.562250 env[1222]: time="2024-02-09T09:42:44.562203408Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 09:42:45.268281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3304627197.mount: Deactivated successfully. 
Feb 9 09:42:46.862845 env[1222]: time="2024-02-09T09:42:46.862790450Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:46.865279 env[1222]: time="2024-02-09T09:42:46.865242476Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:46.869838 env[1222]: time="2024-02-09T09:42:46.869801601Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:46.871569 env[1222]: time="2024-02-09T09:42:46.871527873Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:46.872331 env[1222]: time="2024-02-09T09:42:46.872278490Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88\"" Feb 9 09:42:46.881551 env[1222]: time="2024-02-09T09:42:46.881501051Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 09:42:48.935556 env[1222]: time="2024-02-09T09:42:48.935498035Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:48.938025 env[1222]: time="2024-02-09T09:42:48.937991557Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 9 09:42:48.939430 env[1222]: time="2024-02-09T09:42:48.939405676Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:48.941101 env[1222]: time="2024-02-09T09:42:48.941072996Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:48.943150 env[1222]: time="2024-02-09T09:42:48.943113197Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2\"" Feb 9 09:42:48.952807 env[1222]: time="2024-02-09T09:42:48.952763496Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 09:42:50.259887 env[1222]: time="2024-02-09T09:42:50.259838574Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:50.261335 env[1222]: time="2024-02-09T09:42:50.261311714Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:50.263009 env[1222]: time="2024-02-09T09:42:50.262976295Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:50.264418 env[1222]: time="2024-02-09T09:42:50.264391927Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:50.265907 env[1222]: time="2024-02-09T09:42:50.265881110Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a\"" Feb 9 09:42:50.275764 env[1222]: time="2024-02-09T09:42:50.275713868Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 09:42:51.381999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount465906789.mount: Deactivated successfully. Feb 9 09:42:51.848694 env[1222]: time="2024-02-09T09:42:51.848650586Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:51.850045 env[1222]: time="2024-02-09T09:42:51.850017642Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:51.851849 env[1222]: time="2024-02-09T09:42:51.851820264Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:51.853376 env[1222]: time="2024-02-09T09:42:51.853346920Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:51.854004 env[1222]: time="2024-02-09T09:42:51.853972784Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference 
\"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 9 09:42:51.863463 env[1222]: time="2024-02-09T09:42:51.863413555Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 09:42:52.350715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3782505404.mount: Deactivated successfully. Feb 9 09:42:52.354306 env[1222]: time="2024-02-09T09:42:52.354255014Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:52.355955 env[1222]: time="2024-02-09T09:42:52.355914340Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:52.357374 env[1222]: time="2024-02-09T09:42:52.357348183Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:52.358742 env[1222]: time="2024-02-09T09:42:52.358709912Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:52.359185 env[1222]: time="2024-02-09T09:42:52.359153253Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 9 09:42:52.367710 env[1222]: time="2024-02-09T09:42:52.367672935Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 09:42:53.179600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3571246483.mount: Deactivated successfully. Feb 9 09:42:54.406768 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Feb 9 09:42:54.413030 kernel: kauditd_printk_skb: 87 callbacks suppressed Feb 9 09:42:54.413070 kernel: audit: type=1130 audit(1707471774.405:175): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:54.413093 kernel: audit: type=1131 audit(1707471774.405:176): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:54.413115 kernel: audit: type=1130 audit(1707471774.406:177): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:54.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:54.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:54.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:42:54.406946 systemd[1]: Stopped kubelet.service. Feb 9 09:42:54.408461 systemd[1]: Started kubelet.service. 
Feb 9 09:42:54.457646 kubelet[1638]: E0209 09:42:54.457589 1638 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 09:42:54.460499 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:42:54.460636 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:42:54.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 09:42:54.463324 kernel: audit: type=1131 audit(1707471774.459:178): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 09:42:54.986407 env[1222]: time="2024-02-09T09:42:54.986363424Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:54.987838 env[1222]: time="2024-02-09T09:42:54.987809001Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:54.989338 env[1222]: time="2024-02-09T09:42:54.989308467Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:54.990959 env[1222]: time="2024-02-09T09:42:54.990931086Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:54.991672 env[1222]: 
time="2024-02-09T09:42:54.991646804Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb\"" Feb 9 09:42:55.000212 env[1222]: time="2024-02-09T09:42:55.000185251Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 09:42:55.619837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2506397911.mount: Deactivated successfully. Feb 9 09:42:56.431104 env[1222]: time="2024-02-09T09:42:56.430980717Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:56.440112 env[1222]: time="2024-02-09T09:42:56.440068770Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:56.445900 env[1222]: time="2024-02-09T09:42:56.445873520Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:56.447597 env[1222]: time="2024-02-09T09:42:56.447568777Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:42:56.448033 env[1222]: time="2024-02-09T09:42:56.447979354Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0\"" Feb 9 09:43:01.703609 systemd[1]: Stopped kubelet.service. 
Feb 9 09:43:01.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:01.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:01.707597 kernel: audit: type=1130 audit(1707471781.703:179): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:01.707656 kernel: audit: type=1131 audit(1707471781.703:180): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:01.718407 systemd[1]: Reloading. Feb 9 09:43:01.759913 /usr/lib/systemd/system-generators/torcx-generator[1741]: time="2024-02-09T09:43:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:43:01.759945 /usr/lib/systemd/system-generators/torcx-generator[1741]: time="2024-02-09T09:43:01Z" level=info msg="torcx already run" Feb 9 09:43:01.816623 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:43:01.816641 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 9 09:43:01.833538 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:43:01.892340 systemd[1]: Started kubelet.service. Feb 9 09:43:01.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:01.895317 kernel: audit: type=1130 audit(1707471781.892:181): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:01.936973 kubelet[1784]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:43:01.936973 kubelet[1784]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:43:01.937371 kubelet[1784]: I0209 09:43:01.937141 1784 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 09:43:01.938350 kubelet[1784]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:43:01.938350 kubelet[1784]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 09:43:02.964413 kubelet[1784]: I0209 09:43:02.964368 1784 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 09:43:02.964413 kubelet[1784]: I0209 09:43:02.964398 1784 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 09:43:02.964774 kubelet[1784]: I0209 09:43:02.964618 1784 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 09:43:02.969160 kubelet[1784]: I0209 09:43:02.969140 1784 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:43:02.969337 kubelet[1784]: E0209 09:43:02.969172 1784 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.11:6443: connect: connection refused Feb 9 09:43:02.970833 kubelet[1784]: W0209 09:43:02.970814 1784 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 09:43:02.971626 kubelet[1784]: I0209 09:43:02.971602 1784 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 09:43:02.972039 kubelet[1784]: I0209 09:43:02.972029 1784 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 09:43:02.972107 kubelet[1784]: I0209 09:43:02.972094 1784 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 09:43:02.972187 kubelet[1784]: I0209 09:43:02.972174 1784 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 09:43:02.972187 kubelet[1784]: I0209 09:43:02.972185 1784 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 09:43:02.972358 kubelet[1784]: I0209 09:43:02.972346 1784 state_mem.go:36] "Initialized new 
in-memory state store" Feb 9 09:43:02.976159 kubelet[1784]: I0209 09:43:02.976136 1784 kubelet.go:398] "Attempting to sync node with API server" Feb 9 09:43:02.976159 kubelet[1784]: I0209 09:43:02.976159 1784 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 09:43:02.976387 kubelet[1784]: I0209 09:43:02.976371 1784 kubelet.go:297] "Adding apiserver pod source" Feb 9 09:43:02.976387 kubelet[1784]: I0209 09:43:02.976388 1784 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 09:43:02.976935 kubelet[1784]: W0209 09:43:02.976889 1784 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused Feb 9 09:43:02.976935 kubelet[1784]: E0209 09:43:02.976937 1784 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused Feb 9 09:43:02.977208 kubelet[1784]: W0209 09:43:02.977172 1784 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.11:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused Feb 9 09:43:02.977208 kubelet[1784]: E0209 09:43:02.977209 1784 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.11:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused Feb 9 09:43:02.977427 kubelet[1784]: I0209 09:43:02.977409 1784 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 
09:43:02.978289 kubelet[1784]: W0209 09:43:02.978270 1784 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 09:43:02.978877 kubelet[1784]: I0209 09:43:02.978856 1784 server.go:1186] "Started kubelet" Feb 9 09:43:02.979135 kubelet[1784]: I0209 09:43:02.979052 1784 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:43:02.980063 kubelet[1784]: E0209 09:43:02.979954 1784 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2288a4074d3de", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 2, 978835422, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 2, 978835422, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.11:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.11:6443: connect: connection refused'(may retry after sleeping) Feb 9 09:43:02.980333 kubelet[1784]: E0209 09:43:02.980283 1784 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory 
cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:43:02.980333 kubelet[1784]: E0209 09:43:02.980324 1784 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:43:02.979000 audit[1784]: AVC avc: denied { mac_admin } for pid=1784 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:02.981214 kubelet[1784]: I0209 09:43:02.981194 1784 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 9 09:43:02.981331 kubelet[1784]: I0209 09:43:02.981318 1784 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 9 09:43:02.981451 kubelet[1784]: I0209 09:43:02.981438 1784 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 09:43:02.979000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:43:02.983657 kubelet[1784]: I0209 09:43:02.983627 1784 server.go:451] "Adding debug handlers to kubelet server" Feb 9 09:43:02.983809 kernel: audit: type=1400 audit(1707471782.979:182): avc: denied { mac_admin } for pid=1784 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:02.983852 kernel: audit: type=1401 audit(1707471782.979:182): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:43:02.983875 kernel: audit: type=1300 audit(1707471782.979:182): arch=c00000b7 syscall=5 success=no exit=-22 a0=40010340c0 a1=4000d82888 a2=4001034090 a3=25 items=0 
ppid=1 pid=1784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:02.979000 audit[1784]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40010340c0 a1=4000d82888 a2=4001034090 a3=25 items=0 ppid=1 pid=1784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:02.985666 kubelet[1784]: I0209 09:43:02.985637 1784 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 09:43:02.985744 kubelet[1784]: I0209 09:43:02.985717 1784 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 09:43:02.986345 kernel: audit: type=1327 audit(1707471782.979:182): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 09:43:02.979000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 09:43:02.987583 kubelet[1784]: E0209 09:43:02.987542 1784 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.11:6443: connect: connection refused Feb 9 09:43:02.987781 kubelet[1784]: W0209 09:43:02.987748 1784 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get 
"https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused Feb 9 09:43:02.987874 kubelet[1784]: E0209 09:43:02.987861 1784 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused Feb 9 09:43:02.988746 kernel: audit: type=1400 audit(1707471782.979:183): avc: denied { mac_admin } for pid=1784 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:02.979000 audit[1784]: AVC avc: denied { mac_admin } for pid=1784 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:02.979000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:43:02.991328 kernel: audit: type=1401 audit(1707471782.979:183): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:43:02.991395 kernel: audit: type=1300 audit(1707471782.979:183): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000956a20 a1=4000d828a0 a2=4001034150 a3=25 items=0 ppid=1 pid=1784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:02.979000 audit[1784]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000956a20 a1=4000d828a0 a2=4001034150 a3=25 items=0 ppid=1 pid=1784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:02.979000 audit: PROCTITLE 
proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 09:43:02.982000 audit[1796]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1796 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:02.982000 audit[1796]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc5e4a6c0 a2=0 a3=1 items=0 ppid=1784 pid=1796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:02.982000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 09:43:02.989000 audit[1797]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1797 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:02.989000 audit[1797]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe0700820 a2=0 a3=1 items=0 ppid=1784 pid=1797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:02.989000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 09:43:02.991000 audit[1800]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1800 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:02.991000 audit[1800]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe16ea020 a2=0 a3=1 items=0 ppid=1784 pid=1800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:02.991000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 09:43:02.993000 audit[1803]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1803 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:02.993000 audit[1803]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc0103d70 a2=0 a3=1 items=0 ppid=1784 pid=1803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:02.993000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 09:43:03.000000 audit[1807]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1807 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:03.000000 audit[1807]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffffcfae1f0 a2=0 a3=1 items=0 ppid=1784 pid=1807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:03.000000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 9 09:43:03.002000 audit[1808]: NETFILTER_CFG table=nat:31 family=2 entries=1 op=nft_register_chain pid=1808 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:03.002000 audit[1808]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=96 a0=3 a1=ffffcde263e0 a2=0 a3=1 items=0 ppid=1784 pid=1808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:03.002000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 09:43:03.008000 audit[1811]: NETFILTER_CFG table=nat:32 family=2 entries=1 op=nft_register_rule pid=1811 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:03.008000 audit[1811]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffd7d17b90 a2=0 a3=1 items=0 ppid=1784 pid=1811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:03.008000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 09:43:03.011000 audit[1814]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=1814 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:03.011000 audit[1814]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffdfdb9cc0 a2=0 a3=1 items=0 ppid=1784 pid=1814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:03.011000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 09:43:03.012000 audit[1815]: NETFILTER_CFG table=nat:34 
family=2 entries=1 op=nft_register_chain pid=1815 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:03.012000 audit[1815]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffcfe0b070 a2=0 a3=1 items=0 ppid=1784 pid=1815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:03.012000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 09:43:03.013000 audit[1816]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_chain pid=1816 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:03.013000 audit[1816]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffa523790 a2=0 a3=1 items=0 ppid=1784 pid=1816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:03.013000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 09:43:03.015000 audit[1818]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_rule pid=1818 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:03.015000 audit[1818]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffdffa8090 a2=0 a3=1 items=0 ppid=1784 pid=1818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:03.015000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 09:43:03.018000 audit[1822]: NETFILTER_CFG 
table=nat:37 family=2 entries=1 op=nft_register_rule pid=1822 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:03.018000 audit[1822]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffd6700ab0 a2=0 a3=1 items=0 ppid=1784 pid=1822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:03.018000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 09:43:03.019900 kubelet[1784]: I0209 09:43:03.019883 1784 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 09:43:03.019985 kubelet[1784]: I0209 09:43:03.019975 1784 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 09:43:03.020041 kubelet[1784]: I0209 09:43:03.020032 1784 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:43:03.022028 kubelet[1784]: I0209 09:43:03.022006 1784 policy_none.go:49] "None policy: Start" Feb 9 09:43:03.022583 kubelet[1784]: I0209 09:43:03.022565 1784 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 09:43:03.022718 kubelet[1784]: I0209 09:43:03.022709 1784 state_mem.go:35] "Initializing new in-memory state store" Feb 9 09:43:03.022000 audit[1824]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=1824 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:03.022000 audit[1824]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=fffffb4f3140 a2=0 a3=1 items=0 ppid=1784 pid=1824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:03.022000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 09:43:03.025000 audit[1826]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_rule pid=1826 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:03.025000 audit[1826]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffdb986350 a2=0 a3=1 items=0 ppid=1784 pid=1826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:03.025000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 09:43:03.028212 kubelet[1784]: I0209 09:43:03.028186 1784 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 09:43:03.027000 audit[1784]: AVC avc: denied { mac_admin } for pid=1784 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:03.027000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:43:03.027000 audit[1784]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=400133ec30 a1=4001325590 a2=400133ec00 a3=25 items=0 ppid=1 pid=1784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:03.027000 audit: PROCTITLE 
proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 09:43:03.027000 audit[1828]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_rule pid=1828 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:03.027000 audit[1828]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=540 a0=3 a1=ffffc2987540 a2=0 a3=1 items=0 ppid=1784 pid=1828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:03.027000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 09:43:03.029542 kubelet[1784]: I0209 09:43:03.029478 1784 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 09:43:03.029629 kubelet[1784]: I0209 09:43:03.029608 1784 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 9 09:43:03.029869 kubelet[1784]: I0209 09:43:03.029852 1784 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 09:43:03.028000 audit[1829]: NETFILTER_CFG table=mangle:41 family=10 entries=2 op=nft_register_chain pid=1829 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:03.028000 audit[1829]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe13a0790 a2=0 a3=1 items=0 ppid=1784 pid=1829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:03.028000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 09:43:03.029000 audit[1830]: NETFILTER_CFG table=mangle:42 family=2 entries=1 op=nft_register_chain pid=1830 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:03.029000 audit[1830]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc1c9cc00 a2=0 a3=1 items=0 ppid=1784 pid=1830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:03.029000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 09:43:03.029000 audit[1831]: NETFILTER_CFG table=nat:43 family=10 entries=2 op=nft_register_chain pid=1831 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:03.029000 audit[1831]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd6fdf6c0 a2=0 a3=1 items=0 ppid=1784 pid=1831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:03.029000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 09:43:03.031773 kubelet[1784]: E0209 09:43:03.031754 1784 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 9 09:43:03.030000 audit[1832]: NETFILTER_CFG table=nat:44 family=2 entries=1 op=nft_register_chain pid=1832 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:03.030000 audit[1832]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff10f52f0 a2=0 a3=1 items=0 ppid=1784 pid=1832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:03.030000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 09:43:03.031000 audit[1834]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_chain pid=1834 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:03.031000 audit[1834]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffee82e260 a2=0 a3=1 items=0 ppid=1784 pid=1834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:03.031000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 09:43:03.031000 audit[1835]: NETFILTER_CFG table=nat:46 family=10 entries=1 op=nft_register_rule pid=1835 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:03.031000 audit[1835]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffc1720320 a2=0 a3=1 items=0 ppid=1784 pid=1835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:03.031000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 09:43:03.032000 audit[1836]: NETFILTER_CFG table=filter:47 family=10 entries=2 op=nft_register_chain pid=1836 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:03.032000 audit[1836]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffea260530 a2=0 a3=1 items=0 ppid=1784 pid=1836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:03.032000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 09:43:03.034000 audit[1838]: NETFILTER_CFG table=filter:48 family=10 entries=1 op=nft_register_rule pid=1838 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:03.034000 audit[1838]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffc9218e50 a2=0 a3=1 items=0 ppid=1784 pid=1838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:03.034000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 
09:43:03.035000 audit[1839]: NETFILTER_CFG table=nat:49 family=10 entries=1 op=nft_register_chain pid=1839 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:03.035000 audit[1839]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffee1c3fc0 a2=0 a3=1 items=0 ppid=1784 pid=1839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:03.035000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 09:43:03.036000 audit[1840]: NETFILTER_CFG table=nat:50 family=10 entries=1 op=nft_register_chain pid=1840 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:03.036000 audit[1840]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffeea1b240 a2=0 a3=1 items=0 ppid=1784 pid=1840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:03.036000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 09:43:03.038000 audit[1842]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_rule pid=1842 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:03.038000 audit[1842]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffea8d0770 a2=0 a3=1 items=0 ppid=1784 pid=1842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:03.038000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 09:43:03.040000 audit[1844]: NETFILTER_CFG table=nat:52 family=10 entries=2 op=nft_register_chain pid=1844 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:03.040000 audit[1844]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffde883fe0 a2=0 a3=1 items=0 ppid=1784 pid=1844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:03.040000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 09:43:03.042000 audit[1846]: NETFILTER_CFG table=nat:53 family=10 entries=1 op=nft_register_rule pid=1846 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:03.042000 audit[1846]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffc5966300 a2=0 a3=1 items=0 ppid=1784 pid=1846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:03.042000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 09:43:03.044000 audit[1848]: NETFILTER_CFG table=nat:54 family=10 entries=1 op=nft_register_rule pid=1848 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:03.044000 audit[1848]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffe0a2b7d0 a2=0 a3=1 
items=0 ppid=1784 pid=1848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:03.044000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 09:43:03.048000 audit[1850]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_rule pid=1850 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:03.048000 audit[1850]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=556 a0=3 a1=ffffcc3090c0 a2=0 a3=1 items=0 ppid=1784 pid=1850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:03.048000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 09:43:03.050648 kubelet[1784]: I0209 09:43:03.050625 1784 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 09:43:03.050648 kubelet[1784]: I0209 09:43:03.050648 1784 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 09:43:03.050727 kubelet[1784]: I0209 09:43:03.050667 1784 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 09:43:03.050727 kubelet[1784]: E0209 09:43:03.050712 1784 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 09:43:03.051519 kubelet[1784]: W0209 09:43:03.051487 1784 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused Feb 9 09:43:03.051565 kubelet[1784]: E0209 09:43:03.051525 1784 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused Feb 9 09:43:03.050000 audit[1851]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=1851 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:03.050000 audit[1851]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd8f11ba0 a2=0 a3=1 items=0 ppid=1784 pid=1851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:03.050000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 09:43:03.051000 audit[1852]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=1852 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:03.051000 audit[1852]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=100 a0=3 a1=fffffb5aef50 a2=0 a3=1 items=0 ppid=1784 pid=1852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:03.051000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 09:43:03.052000 audit[1853]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=1853 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:03.052000 audit[1853]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe11d0ca0 a2=0 a3=1 items=0 ppid=1784 pid=1853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:03.052000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 09:43:03.087063 kubelet[1784]: I0209 09:43:03.087028 1784 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 09:43:03.087436 kubelet[1784]: E0209 09:43:03.087414 1784 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.11:6443/api/v1/nodes\": dial tcp 10.0.0.11:6443: connect: connection refused" node="localhost" Feb 9 09:43:03.151596 kubelet[1784]: I0209 09:43:03.151566 1784 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:43:03.152739 kubelet[1784]: I0209 09:43:03.152704 1784 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:43:03.153453 kubelet[1784]: I0209 09:43:03.153430 1784 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:43:03.154571 kubelet[1784]: I0209 09:43:03.154538 1784 status_manager.go:698] "Failed to get status for pod" podUID=72ae17a74a2eae76daac6d298477aff0 
pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" Feb 9 09:43:03.154769 kubelet[1784]: I0209 09:43:03.154737 1784 status_manager.go:698] "Failed to get status for pod" podUID=75f335a719df4b83cfc2234b5d722c39 pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" Feb 9 09:43:03.155727 kubelet[1784]: I0209 09:43:03.155703 1784 status_manager.go:698] "Failed to get status for pod" podUID=550020dd9f101bcc23e1d3c651841c4d pod="kube-system/kube-controller-manager-localhost" err="Get \"https://10.0.0.11:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.11:6443: connect: connection refused" Feb 9 09:43:03.188589 kubelet[1784]: E0209 09:43:03.188544 1784 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.11:6443: connect: connection refused Feb 9 09:43:03.286900 kubelet[1784]: I0209 09:43:03.286806 1784 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/75f335a719df4b83cfc2234b5d722c39-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"75f335a719df4b83cfc2234b5d722c39\") " pod="kube-system/kube-apiserver-localhost" Feb 9 09:43:03.286900 kubelet[1784]: I0209 09:43:03.286846 1784 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 9 09:43:03.286900 kubelet[1784]: I0209 09:43:03.286874 1784 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:43:03.286900 kubelet[1784]: I0209 09:43:03.286899 1784 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:43:03.287035 kubelet[1784]: I0209 09:43:03.286922 1784 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:43:03.287035 kubelet[1784]: I0209 09:43:03.286942 1784 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 9 09:43:03.287035 kubelet[1784]: I0209 09:43:03.286993 1784 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/75f335a719df4b83cfc2234b5d722c39-ca-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"75f335a719df4b83cfc2234b5d722c39\") " pod="kube-system/kube-apiserver-localhost" Feb 9 09:43:03.287099 kubelet[1784]: I0209 09:43:03.287043 1784 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/75f335a719df4b83cfc2234b5d722c39-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"75f335a719df4b83cfc2234b5d722c39\") " pod="kube-system/kube-apiserver-localhost" Feb 9 09:43:03.287099 kubelet[1784]: I0209 09:43:03.287070 1784 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:43:03.288968 kubelet[1784]: I0209 09:43:03.288897 1784 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 09:43:03.289264 kubelet[1784]: E0209 09:43:03.289249 1784 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.11:6443/api/v1/nodes\": dial tcp 10.0.0.11:6443: connect: connection refused" node="localhost" Feb 9 09:43:03.456828 kubelet[1784]: E0209 09:43:03.456801 1784 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:03.457633 env[1222]: time="2024-02-09T09:43:03.457583291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,}" Feb 9 09:43:03.459771 kubelet[1784]: E0209 09:43:03.459737 1784 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 
09:43:03.460279 env[1222]: time="2024-02-09T09:43:03.460067369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:75f335a719df4b83cfc2234b5d722c39,Namespace:kube-system,Attempt:0,}" Feb 9 09:43:03.460590 kubelet[1784]: E0209 09:43:03.460573 1784 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:03.461004 env[1222]: time="2024-02-09T09:43:03.460976602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,}" Feb 9 09:43:03.589456 kubelet[1784]: E0209 09:43:03.589351 1784 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.11:6443: connect: connection refused Feb 9 09:43:03.690628 kubelet[1784]: I0209 09:43:03.690585 1784 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 09:43:03.690939 kubelet[1784]: E0209 09:43:03.690920 1784 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.11:6443/api/v1/nodes\": dial tcp 10.0.0.11:6443: connect: connection refused" node="localhost" Feb 9 09:43:03.789716 kubelet[1784]: W0209 09:43:03.789651 1784 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused Feb 9 09:43:03.789716 kubelet[1784]: E0209 09:43:03.789713 1784 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused Feb 9 09:43:03.793079 kubelet[1784]: W0209 09:43:03.793036 1784 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.11:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused Feb 9 09:43:03.793181 kubelet[1784]: E0209 09:43:03.793169 1784 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.11:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused Feb 9 09:43:04.062472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3958059847.mount: Deactivated successfully. Feb 9 09:43:04.065194 env[1222]: time="2024-02-09T09:43:04.065144651Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:04.067079 env[1222]: time="2024-02-09T09:43:04.067046699Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:04.067964 env[1222]: time="2024-02-09T09:43:04.067936825Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:04.069399 env[1222]: time="2024-02-09T09:43:04.069372478Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:04.070020 env[1222]: time="2024-02-09T09:43:04.069995978Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:04.072498 env[1222]: time="2024-02-09T09:43:04.072470864Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:04.073228 env[1222]: time="2024-02-09T09:43:04.073198567Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:04.076283 env[1222]: time="2024-02-09T09:43:04.076246850Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:04.078591 env[1222]: time="2024-02-09T09:43:04.078558714Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:04.079333 env[1222]: time="2024-02-09T09:43:04.079293294Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:04.081159 env[1222]: time="2024-02-09T09:43:04.081115891Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:04.082041 env[1222]: time="2024-02-09T09:43:04.082011934Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:04.109392 env[1222]: time="2024-02-09T09:43:04.109319330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:43:04.109392 env[1222]: time="2024-02-09T09:43:04.109361636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:43:04.109540 env[1222]: time="2024-02-09T09:43:04.109373351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:43:04.111049 env[1222]: time="2024-02-09T09:43:04.110999857Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fc91fc420a136570dbb2705407811ee5c3c28b8cb29ba5f36e08e4efe117f146 pid=1869 runtime=io.containerd.runc.v2 Feb 9 09:43:04.113017 env[1222]: time="2024-02-09T09:43:04.112958245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:43:04.113017 env[1222]: time="2024-02-09T09:43:04.112994712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:43:04.113017 env[1222]: time="2024-02-09T09:43:04.113005749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:43:04.113207 env[1222]: time="2024-02-09T09:43:04.113156335Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e3b01af152330605f6377555a699fd960e0720518e17cddbcd7b2256602e53f0 pid=1875 runtime=io.containerd.runc.v2 Feb 9 09:43:04.118830 env[1222]: time="2024-02-09T09:43:04.116174390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:43:04.118830 env[1222]: time="2024-02-09T09:43:04.116209137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:43:04.118830 env[1222]: time="2024-02-09T09:43:04.116219654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:43:04.118830 env[1222]: time="2024-02-09T09:43:04.116388154Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/420f6f2244168f2633cb4ad3ff9b014489e4c12b4966b92037aa6a6b4b7ba412 pid=1893 runtime=io.containerd.runc.v2 Feb 9 09:43:04.183437 env[1222]: time="2024-02-09T09:43:04.183390292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:75f335a719df4b83cfc2234b5d722c39,Namespace:kube-system,Attempt:0,} returns sandbox id \"420f6f2244168f2633cb4ad3ff9b014489e4c12b4966b92037aa6a6b4b7ba412\"" Feb 9 09:43:04.188960 env[1222]: time="2024-02-09T09:43:04.188920939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3b01af152330605f6377555a699fd960e0720518e17cddbcd7b2256602e53f0\"" Feb 9 09:43:04.189460 kubelet[1784]: E0209 09:43:04.189439 1784 dns.go:156] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:04.189986 kubelet[1784]: E0209 09:43:04.189971 1784 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:04.192707 env[1222]: time="2024-02-09T09:43:04.192668416Z" level=info msg="CreateContainer within sandbox \"420f6f2244168f2633cb4ad3ff9b014489e4c12b4966b92037aa6a6b4b7ba412\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 09:43:04.193686 env[1222]: time="2024-02-09T09:43:04.193652388Z" level=info msg="CreateContainer within sandbox \"e3b01af152330605f6377555a699fd960e0720518e17cddbcd7b2256602e53f0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 09:43:04.196098 env[1222]: time="2024-02-09T09:43:04.196061417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc91fc420a136570dbb2705407811ee5c3c28b8cb29ba5f36e08e4efe117f146\"" Feb 9 09:43:04.196533 kubelet[1784]: E0209 09:43:04.196519 1784 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:04.198259 env[1222]: time="2024-02-09T09:43:04.198229772Z" level=info msg="CreateContainer within sandbox \"fc91fc420a136570dbb2705407811ee5c3c28b8cb29ba5f36e08e4efe117f146\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 09:43:04.210004 env[1222]: time="2024-02-09T09:43:04.209955031Z" level=info msg="CreateContainer within sandbox \"420f6f2244168f2633cb4ad3ff9b014489e4c12b4966b92037aa6a6b4b7ba412\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"78548ea683a92539625f9dab4c0a0b4ed1db596b5afd460353fafffba75d4916\"" Feb 9 09:43:04.210716 env[1222]: time="2024-02-09T09:43:04.210684773Z" level=info msg="StartContainer for \"78548ea683a92539625f9dab4c0a0b4ed1db596b5afd460353fafffba75d4916\"" Feb 9 09:43:04.216321 env[1222]: time="2024-02-09T09:43:04.216264642Z" level=info msg="CreateContainer within sandbox \"fc91fc420a136570dbb2705407811ee5c3c28b8cb29ba5f36e08e4efe117f146\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9ddad0eaaf7ea8f2003416ee03c2d8afff48658472bdb759af1c4bd230732cae\"" Feb 9 09:43:04.216806 env[1222]: time="2024-02-09T09:43:04.216781340Z" level=info msg="StartContainer for \"9ddad0eaaf7ea8f2003416ee03c2d8afff48658472bdb759af1c4bd230732cae\"" Feb 9 09:43:04.218268 env[1222]: time="2024-02-09T09:43:04.218227749Z" level=info msg="CreateContainer within sandbox \"e3b01af152330605f6377555a699fd960e0720518e17cddbcd7b2256602e53f0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9a0c9b909aa09386b677e0aad93cdb41616aefbc7133a18a7923a7e5ed331a83\"" Feb 9 09:43:04.218671 env[1222]: time="2024-02-09T09:43:04.218625689Z" level=info msg="StartContainer for \"9a0c9b909aa09386b677e0aad93cdb41616aefbc7133a18a7923a7e5ed331a83\"" Feb 9 09:43:04.233609 kubelet[1784]: W0209 09:43:04.233540 1784 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused Feb 9 09:43:04.233609 kubelet[1784]: E0209 09:43:04.233606 1784 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused Feb 9 09:43:04.301251 kubelet[1784]: W0209 09:43:04.300533 1784 reflector.go:424] 
vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused Feb 9 09:43:04.301251 kubelet[1784]: E0209 09:43:04.300597 1784 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused Feb 9 09:43:04.318755 env[1222]: time="2024-02-09T09:43:04.318662400Z" level=info msg="StartContainer for \"9a0c9b909aa09386b677e0aad93cdb41616aefbc7133a18a7923a7e5ed331a83\" returns successfully" Feb 9 09:43:04.321608 env[1222]: time="2024-02-09T09:43:04.321578251Z" level=info msg="StartContainer for \"9ddad0eaaf7ea8f2003416ee03c2d8afff48658472bdb759af1c4bd230732cae\" returns successfully" Feb 9 09:43:04.357441 env[1222]: time="2024-02-09T09:43:04.357399760Z" level=info msg="StartContainer for \"78548ea683a92539625f9dab4c0a0b4ed1db596b5afd460353fafffba75d4916\" returns successfully" Feb 9 09:43:04.393038 kubelet[1784]: E0209 09:43:04.389817 1784 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.11:6443: connect: connection refused Feb 9 09:43:04.492614 kubelet[1784]: I0209 09:43:04.492581 1784 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 09:43:04.492915 kubelet[1784]: E0209 09:43:04.492894 1784 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.11:6443/api/v1/nodes\": dial tcp 10.0.0.11:6443: connect: connection refused" node="localhost" Feb 9 09:43:05.057334 kubelet[1784]: E0209 09:43:05.055987 1784 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:05.061685 kubelet[1784]: E0209 09:43:05.061664 1784 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:05.063561 kubelet[1784]: E0209 09:43:05.063540 1784 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:06.065849 kubelet[1784]: E0209 09:43:06.065821 1784 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:06.066657 kubelet[1784]: E0209 09:43:06.066594 1784 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:06.066925 kubelet[1784]: E0209 09:43:06.066908 1784 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:06.094133 kubelet[1784]: I0209 09:43:06.094108 1784 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 09:43:06.730594 kubelet[1784]: E0209 09:43:06.730563 1784 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 9 09:43:06.792376 kubelet[1784]: I0209 09:43:06.792341 1784 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 09:43:06.793780 kubelet[1784]: E0209 09:43:06.793758 1784 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Feb 9 09:43:06.820975 
kubelet[1784]: E0209 09:43:06.820887 1784 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2288a4074d3de", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 2, 978835422, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 2, 978835422, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 09:43:06.876110 kubelet[1784]: E0209 09:43:06.876016 1784 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2288a408b6d02", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 2, 980316418, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 2, 980316418, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 09:43:06.929784 kubelet[1784]: E0209 09:43:06.929683 1784 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2288a42dd806d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node localhost status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 3, 19249773, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 3, 19249773, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 09:43:06.979349 kubelet[1784]: I0209 09:43:06.979288 1784 apiserver.go:52] "Watching apiserver" Feb 9 09:43:06.982682 kubelet[1784]: E0209 09:43:06.982547 1784 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2288a42ddae08", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node localhost status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 3, 19261448, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 3, 19261448, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 09:43:07.036174 kubelet[1784]: E0209 09:43:07.036086 1784 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2288a42ddbd7e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node localhost status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 3, 19265406, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 3, 19265406, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 09:43:07.089402 kubelet[1784]: E0209 09:43:07.089289 1784 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2288a4387d873", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 3, 30413427, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 3, 30413427, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 09:43:07.145170 kubelet[1784]: E0209 09:43:07.145082 1784 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2288a42dd806d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node localhost status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 3, 19249773, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 3, 86996029, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 09:43:07.201734 kubelet[1784]: E0209 09:43:07.201629 1784 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2288a42ddae08", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node localhost status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 3, 19261448, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 3, 87002146, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 09:43:07.257228 kubelet[1784]: E0209 09:43:07.257068 1784 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2288a42ddbd7e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node localhost status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 3, 19265406, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 3, 87004905, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 09:43:07.386116 kubelet[1784]: I0209 09:43:07.386067 1784 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 09:43:07.408282 kubelet[1784]: I0209 09:43:07.408241 1784 reconciler.go:41] "Reconciler: start to sync state" Feb 9 09:43:07.476068 kubelet[1784]: E0209 09:43:07.475978 1784 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2288a42dd806d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node localhost status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 3, 19249773, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 3, 152629937, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 09:43:07.581646 kubelet[1784]: E0209 09:43:07.581540 1784 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Feb 9 09:43:07.582019 kubelet[1784]: E0209 09:43:07.581947 1784 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:07.828649 kubelet[1784]: E0209 09:43:07.828611 1784 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:07.944005 kubelet[1784]: E0209 09:43:07.943773 1784 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2288a42ddae08", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node localhost status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 3, 19261448, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 3, 152641853, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 09:43:07.981389 kubelet[1784]: E0209 09:43:07.981356 1784 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:08.067180 kubelet[1784]: E0209 09:43:08.067151 1784 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:08.067461 kubelet[1784]: E0209 09:43:08.067440 1784 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:08.275145 kubelet[1784]: E0209 09:43:08.274986 1784 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2288a42ddbd7e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node localhost status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 43, 3, 19265406, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 43, 3, 152645011, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 
0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 09:43:09.474908 systemd[1]: Reloading. Feb 9 09:43:09.521660 /usr/lib/systemd/system-generators/torcx-generator[2118]: time="2024-02-09T09:43:09Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:43:09.521704 /usr/lib/systemd/system-generators/torcx-generator[2118]: time="2024-02-09T09:43:09Z" level=info msg="torcx already run" Feb 9 09:43:09.581986 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:43:09.582005 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:43:09.598825 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:43:09.684533 systemd[1]: Stopping kubelet.service... Feb 9 09:43:09.700620 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 09:43:09.700971 systemd[1]: Stopped kubelet.service. Feb 9 09:43:09.703363 kernel: kauditd_printk_skb: 104 callbacks suppressed Feb 9 09:43:09.703401 kernel: audit: type=1131 audit(1707471789.699:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:43:09.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:09.703246 systemd[1]: Started kubelet.service. Feb 9 09:43:09.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:09.706321 kernel: audit: type=1130 audit(1707471789.702:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:09.754813 kubelet[2163]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:43:09.755128 kubelet[2163]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:43:09.755261 kubelet[2163]: I0209 09:43:09.755228 2163 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 09:43:09.756604 kubelet[2163]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:43:09.756689 kubelet[2163]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 09:43:09.759592 kubelet[2163]: I0209 09:43:09.759561 2163 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 09:43:09.759592 kubelet[2163]: I0209 09:43:09.759587 2163 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 09:43:09.759793 kubelet[2163]: I0209 09:43:09.759768 2163 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 09:43:09.761075 kubelet[2163]: I0209 09:43:09.761053 2163 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 09:43:09.764203 kubelet[2163]: W0209 09:43:09.764177 2163 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 09:43:09.764591 kubelet[2163]: I0209 09:43:09.764561 2163 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:43:09.765215 kubelet[2163]: I0209 09:43:09.765187 2163 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 09:43:09.765604 kubelet[2163]: I0209 09:43:09.765585 2163 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 09:43:09.765671 kubelet[2163]: I0209 09:43:09.765655 2163 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree 
Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 09:43:09.765745 kubelet[2163]: I0209 09:43:09.765675 2163 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 09:43:09.765745 kubelet[2163]: I0209 09:43:09.765687 2163 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 09:43:09.765745 kubelet[2163]: I0209 09:43:09.765713 2163 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:43:09.768593 kubelet[2163]: I0209 09:43:09.768566 2163 kubelet.go:398] "Attempting to sync node with API server" Feb 9 09:43:09.768593 kubelet[2163]: I0209 09:43:09.768592 2163 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 09:43:09.768658 kubelet[2163]: I0209 09:43:09.768615 2163 kubelet.go:297] "Adding apiserver pod source" Feb 9 09:43:09.768658 kubelet[2163]: I0209 09:43:09.768626 2163 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 09:43:09.769479 kubelet[2163]: I0209 09:43:09.769458 2163 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 09:43:09.769937 kubelet[2163]: I0209 09:43:09.769911 2163 server.go:1186] "Started kubelet" Feb 9 09:43:09.770188 kubelet[2163]: I0209 09:43:09.770162 2163 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:43:09.770865 kubelet[2163]: I0209 09:43:09.770839 
2163 server.go:451] "Adding debug handlers to kubelet server" Feb 9 09:43:09.769000 audit[2163]: AVC avc: denied { mac_admin } for pid=2163 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:09.773320 kernel: audit: type=1400 audit(1707471789.769:220): avc: denied { mac_admin } for pid=2163 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:09.769000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:43:09.773649 kubelet[2163]: E0209 09:43:09.773625 2163 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:43:09.773649 kubelet[2163]: E0209 09:43:09.773647 2163 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:43:09.769000 audit[2163]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000a67b00 a1=4000dfe918 a2=4000a67ad0 a3=25 items=0 ppid=1 pid=2163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:09.774781 kubelet[2163]: I0209 09:43:09.774764 2163 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 9 09:43:09.774870 kubelet[2163]: I0209 09:43:09.774860 2163 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 9 09:43:09.774938 kubelet[2163]: I0209 09:43:09.774929 2163 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 09:43:09.776103 kubelet[2163]: I0209 09:43:09.776084 2163 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 09:43:09.776256 kubelet[2163]: I0209 09:43:09.776242 2163 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 09:43:09.783273 kernel: audit: type=1401 audit(1707471789.769:220): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:43:09.783348 kernel: audit: type=1300 audit(1707471789.769:220): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000a67b00 a1=4000dfe918 a2=4000a67ad0 a3=25 items=0 ppid=1 pid=2163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:09.783367 kernel: audit: type=1327 audit(1707471789.769:220): 
proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 09:43:09.783382 kernel: audit: type=1400 audit(1707471789.773:221): avc: denied { mac_admin } for pid=2163 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:09.769000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 09:43:09.773000 audit[2163]: AVC avc: denied { mac_admin } for pid=2163 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:09.773000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:43:09.784994 kernel: audit: type=1401 audit(1707471789.773:221): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:43:09.773000 audit[2163]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000d008c0 a1=4000dfe930 a2=4000a67b90 a3=25 items=0 ppid=1 pid=2163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:09.789162 kernel: audit: type=1300 audit(1707471789.773:221): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000d008c0 a1=4000dfe930 a2=4000a67b90 a3=25 items=0 ppid=1 pid=2163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 
key=(null) Feb 9 09:43:09.773000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 09:43:09.792509 kernel: audit: type=1327 audit(1707471789.773:221): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 09:43:09.833838 kubelet[2163]: I0209 09:43:09.833803 2163 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 09:43:09.848533 kubelet[2163]: I0209 09:43:09.847660 2163 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 09:43:09.848533 kubelet[2163]: I0209 09:43:09.847687 2163 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 09:43:09.848533 kubelet[2163]: I0209 09:43:09.847708 2163 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 09:43:09.848533 kubelet[2163]: E0209 09:43:09.847776 2163 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 09:43:09.852309 kubelet[2163]: I0209 09:43:09.852274 2163 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 09:43:09.852427 kubelet[2163]: I0209 09:43:09.852416 2163 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 09:43:09.852503 kubelet[2163]: I0209 09:43:09.852493 2163 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:43:09.852681 kubelet[2163]: I0209 09:43:09.852669 2163 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 09:43:09.852775 kubelet[2163]: I0209 09:43:09.852752 2163 state_mem.go:96] "Updated CPUSet assignments" 
assignments=map[] Feb 9 09:43:09.852842 kubelet[2163]: I0209 09:43:09.852832 2163 policy_none.go:49] "None policy: Start" Feb 9 09:43:09.853985 kubelet[2163]: I0209 09:43:09.853966 2163 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 09:43:09.854078 kubelet[2163]: I0209 09:43:09.854067 2163 state_mem.go:35] "Initializing new in-memory state store" Feb 9 09:43:09.854256 kubelet[2163]: I0209 09:43:09.854243 2163 state_mem.go:75] "Updated machine memory state" Feb 9 09:43:09.855694 kubelet[2163]: I0209 09:43:09.855671 2163 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 09:43:09.854000 audit[2163]: AVC avc: denied { mac_admin } for pid=2163 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:09.854000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:43:09.854000 audit[2163]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40016eb2f0 a1=40016e6e28 a2=40016eb2c0 a3=25 items=0 ppid=1 pid=2163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:09.854000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 09:43:09.856055 kubelet[2163]: I0209 09:43:09.856033 2163 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 9 09:43:09.856296 kubelet[2163]: I0209 09:43:09.856279 2163 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 09:43:09.879008 kubelet[2163]: I0209 09:43:09.878959 2163 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 09:43:09.888539 kubelet[2163]: I0209 09:43:09.888491 2163 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 9 09:43:09.888691 kubelet[2163]: I0209 09:43:09.888585 2163 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 09:43:09.948349 kubelet[2163]: I0209 09:43:09.948314 2163 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:43:09.948497 kubelet[2163]: I0209 09:43:09.948407 2163 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:43:09.948497 kubelet[2163]: I0209 09:43:09.948441 2163 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:43:09.953239 kubelet[2163]: E0209 09:43:09.953208 2163 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 9 09:43:09.953454 kubelet[2163]: E0209 09:43:09.953433 2163 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 9 09:43:10.077468 kubelet[2163]: I0209 09:43:10.077364 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:43:10.077468 kubelet[2163]: I0209 09:43:10.077410 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 9 09:43:10.077468 kubelet[2163]: I0209 09:43:10.077435 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/75f335a719df4b83cfc2234b5d722c39-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"75f335a719df4b83cfc2234b5d722c39\") " pod="kube-system/kube-apiserver-localhost" Feb 9 09:43:10.077468 kubelet[2163]: I0209 09:43:10.077456 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/75f335a719df4b83cfc2234b5d722c39-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"75f335a719df4b83cfc2234b5d722c39\") " pod="kube-system/kube-apiserver-localhost" Feb 9 09:43:10.077647 kubelet[2163]: I0209 09:43:10.077480 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/75f335a719df4b83cfc2234b5d722c39-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"75f335a719df4b83cfc2234b5d722c39\") " pod="kube-system/kube-apiserver-localhost" Feb 9 09:43:10.077647 kubelet[2163]: I0209 09:43:10.077503 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:43:10.077647 kubelet[2163]: I0209 09:43:10.077523 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:43:10.077647 kubelet[2163]: I0209 09:43:10.077543 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:43:10.077647 kubelet[2163]: I0209 09:43:10.077600 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 09:43:10.254497 kubelet[2163]: E0209 09:43:10.254465 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:10.254815 kubelet[2163]: E0209 09:43:10.254785 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:10.274435 kubelet[2163]: E0209 09:43:10.274411 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:10.769669 kubelet[2163]: I0209 09:43:10.769636 2163 apiserver.go:52] "Watching apiserver" Feb 9 09:43:10.977073 kubelet[2163]: I0209 09:43:10.977043 2163 desired_state_of_world_populator.go:159] "Finished populating initial 
desired state of world" Feb 9 09:43:10.982485 kubelet[2163]: I0209 09:43:10.982447 2163 reconciler.go:41] "Reconciler: start to sync state" Feb 9 09:43:11.374239 kubelet[2163]: E0209 09:43:11.374205 2163 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 9 09:43:11.374850 kubelet[2163]: E0209 09:43:11.374836 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:11.582617 kubelet[2163]: E0209 09:43:11.582583 2163 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 9 09:43:11.583055 kubelet[2163]: E0209 09:43:11.583040 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:11.774425 kubelet[2163]: E0209 09:43:11.774395 2163 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 9 09:43:11.775259 kubelet[2163]: E0209 09:43:11.775243 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:11.861551 kubelet[2163]: E0209 09:43:11.861517 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:11.861701 kubelet[2163]: E0209 09:43:11.861585 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 
09:43:11.861931 kubelet[2163]: E0209 09:43:11.861912 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:11.982219 kubelet[2163]: I0209 09:43:11.982187 2163 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.982137436 pod.CreationTimestamp="2024-02-09 09:43:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:43:11.981680693 +0000 UTC m=+2.274461104" watchObservedRunningTime="2024-02-09 09:43:11.982137436 +0000 UTC m=+2.274917847" Feb 9 09:43:12.374949 kubelet[2163]: I0209 09:43:12.374918 2163 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.374882587 pod.CreationTimestamp="2024-02-09 09:43:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:43:12.374551719 +0000 UTC m=+2.667332130" watchObservedRunningTime="2024-02-09 09:43:12.374882587 +0000 UTC m=+2.667662998" Feb 9 09:43:12.774025 kubelet[2163]: I0209 09:43:12.773982 2163 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.7739472339999995 pod.CreationTimestamp="2024-02-09 09:43:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:43:12.773709882 +0000 UTC m=+3.066490293" watchObservedRunningTime="2024-02-09 09:43:12.773947234 +0000 UTC m=+3.066727645" Feb 9 09:43:13.052311 kubelet[2163]: E0209 09:43:13.052190 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:13.335504 kubelet[2163]: E0209 09:43:13.335397 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:14.916450 kubelet[2163]: E0209 09:43:14.916410 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:15.167000 audit[1377]: USER_END pid=1377 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 09:43:15.169138 sudo[1377]: pam_unix(sudo:session): session closed for user root Feb 9 09:43:15.169750 kernel: kauditd_printk_skb: 4 callbacks suppressed Feb 9 09:43:15.169788 kernel: audit: type=1106 audit(1707471795.167:223): pid=1377 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 09:43:15.171141 sshd[1371]: pam_unix(sshd:session): session closed for user core Feb 9 09:43:15.168000 audit[1377]: CRED_DISP pid=1377 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 09:43:15.173769 systemd[1]: sshd@6-10.0.0.11:22-10.0.0.1:46612.service: Deactivated successfully. Feb 9 09:43:15.174072 kernel: audit: type=1104 audit(1707471795.168:224): pid=1377 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Feb 9 09:43:15.174098 kernel: audit: type=1106 audit(1707471795.170:225): pid=1371 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:43:15.170000 audit[1371]: USER_END pid=1371 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:43:15.174622 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 09:43:15.174993 systemd-logind[1203]: Session 7 logged out. Waiting for processes to exit. Feb 9 09:43:15.175608 systemd-logind[1203]: Removed session 7. Feb 9 09:43:15.170000 audit[1371]: CRED_DISP pid=1371 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:43:15.178333 kernel: audit: type=1104 audit(1707471795.170:226): pid=1371 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:43:15.178396 kernel: audit: type=1131 audit(1707471795.172:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.11:22-10.0.0.1:46612 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:15.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.11:22-10.0.0.1:46612 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 9 09:43:15.867343 kubelet[2163]: E0209 09:43:15.867193 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:22.007832 update_engine[1206]: I0209 09:43:22.007783 1206 update_attempter.cc:509] Updating boot flags... Feb 9 09:43:23.060927 kubelet[2163]: E0209 09:43:23.060695 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:23.260708 kubelet[2163]: I0209 09:43:23.260679 2163 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 09:43:23.261327 env[1222]: time="2024-02-09T09:43:23.261254319Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 09:43:23.261627 kubelet[2163]: I0209 09:43:23.261477 2163 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 09:43:23.343448 kubelet[2163]: E0209 09:43:23.343346 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:24.076130 kubelet[2163]: I0209 09:43:24.076059 2163 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:43:24.188877 kubelet[2163]: I0209 09:43:24.188836 2163 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:43:24.276664 kubelet[2163]: I0209 09:43:24.276621 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d5f36a9c-c52c-4a2c-80c4-40bbb59110b8-kube-proxy\") pod \"kube-proxy-vpqrj\" (UID: \"d5f36a9c-c52c-4a2c-80c4-40bbb59110b8\") " pod="kube-system/kube-proxy-vpqrj" Feb 9 09:43:24.276792 kubelet[2163]: I0209 
09:43:24.276685 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5f36a9c-c52c-4a2c-80c4-40bbb59110b8-xtables-lock\") pod \"kube-proxy-vpqrj\" (UID: \"d5f36a9c-c52c-4a2c-80c4-40bbb59110b8\") " pod="kube-system/kube-proxy-vpqrj" Feb 9 09:43:24.276792 kubelet[2163]: I0209 09:43:24.276709 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5f36a9c-c52c-4a2c-80c4-40bbb59110b8-lib-modules\") pod \"kube-proxy-vpqrj\" (UID: \"d5f36a9c-c52c-4a2c-80c4-40bbb59110b8\") " pod="kube-system/kube-proxy-vpqrj" Feb 9 09:43:24.276792 kubelet[2163]: I0209 09:43:24.276731 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnzl7\" (UniqueName: \"kubernetes.io/projected/d5f36a9c-c52c-4a2c-80c4-40bbb59110b8-kube-api-access-lnzl7\") pod \"kube-proxy-vpqrj\" (UID: \"d5f36a9c-c52c-4a2c-80c4-40bbb59110b8\") " pod="kube-system/kube-proxy-vpqrj" Feb 9 09:43:24.377024 kubelet[2163]: I0209 09:43:24.376900 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkfdm\" (UniqueName: \"kubernetes.io/projected/032494ff-d9b7-486d-8495-ba3bfc4e1866-kube-api-access-bkfdm\") pod \"tigera-operator-cfc98749c-86wvj\" (UID: \"032494ff-d9b7-486d-8495-ba3bfc4e1866\") " pod="tigera-operator/tigera-operator-cfc98749c-86wvj" Feb 9 09:43:24.377024 kubelet[2163]: I0209 09:43:24.377007 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/032494ff-d9b7-486d-8495-ba3bfc4e1866-var-lib-calico\") pod \"tigera-operator-cfc98749c-86wvj\" (UID: \"032494ff-d9b7-486d-8495-ba3bfc4e1866\") " pod="tigera-operator/tigera-operator-cfc98749c-86wvj" Feb 9 09:43:24.678718 kubelet[2163]: E0209 
09:43:24.678688 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:24.679781 env[1222]: time="2024-02-09T09:43:24.679423223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vpqrj,Uid:d5f36a9c-c52c-4a2c-80c4-40bbb59110b8,Namespace:kube-system,Attempt:0,}" Feb 9 09:43:24.701292 env[1222]: time="2024-02-09T09:43:24.701201452Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:43:24.701292 env[1222]: time="2024-02-09T09:43:24.701244491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:43:24.701292 env[1222]: time="2024-02-09T09:43:24.701268251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:43:24.701457 env[1222]: time="2024-02-09T09:43:24.701426408Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1b658c33ea7b2ca348265e3309a3477f429141625fdc2bbdd404c6dbd259273c pid=2296 runtime=io.containerd.runc.v2 Feb 9 09:43:24.745518 env[1222]: time="2024-02-09T09:43:24.745478017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vpqrj,Uid:d5f36a9c-c52c-4a2c-80c4-40bbb59110b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b658c33ea7b2ca348265e3309a3477f429141625fdc2bbdd404c6dbd259273c\"" Feb 9 09:43:24.746447 kubelet[2163]: E0209 09:43:24.746419 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:24.748406 env[1222]: time="2024-02-09T09:43:24.748364042Z" level=info msg="CreateContainer within sandbox 
\"1b658c33ea7b2ca348265e3309a3477f429141625fdc2bbdd404c6dbd259273c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 09:43:24.761526 env[1222]: time="2024-02-09T09:43:24.761489115Z" level=info msg="CreateContainer within sandbox \"1b658c33ea7b2ca348265e3309a3477f429141625fdc2bbdd404c6dbd259273c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"01a3923ab5a3c15393cc8cdaa9b946473c4119d05731a46eb5199a28da7ff7ff\"" Feb 9 09:43:24.762042 env[1222]: time="2024-02-09T09:43:24.762018665Z" level=info msg="StartContainer for \"01a3923ab5a3c15393cc8cdaa9b946473c4119d05731a46eb5199a28da7ff7ff\"" Feb 9 09:43:24.791801 env[1222]: time="2024-02-09T09:43:24.791761224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-86wvj,Uid:032494ff-d9b7-486d-8495-ba3bfc4e1866,Namespace:tigera-operator,Attempt:0,}" Feb 9 09:43:24.811365 env[1222]: time="2024-02-09T09:43:24.811230736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:43:24.811522 env[1222]: time="2024-02-09T09:43:24.811282375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:43:24.811522 env[1222]: time="2024-02-09T09:43:24.811292855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:43:24.811522 env[1222]: time="2024-02-09T09:43:24.811431013Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c94eefb8f942813d718c4c731e9bd977c025a068f878bce286ee1d2cf742b560 pid=2360 runtime=io.containerd.runc.v2 Feb 9 09:43:24.818787 env[1222]: time="2024-02-09T09:43:24.818746475Z" level=info msg="StartContainer for \"01a3923ab5a3c15393cc8cdaa9b946473c4119d05731a46eb5199a28da7ff7ff\" returns successfully" Feb 9 09:43:24.881640 kubelet[2163]: E0209 09:43:24.881232 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:24.887288 env[1222]: time="2024-02-09T09:43:24.887232382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-86wvj,Uid:032494ff-d9b7-486d-8495-ba3bfc4e1866,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c94eefb8f942813d718c4c731e9bd977c025a068f878bce286ee1d2cf742b560\"" Feb 9 09:43:24.888772 env[1222]: time="2024-02-09T09:43:24.888736794Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\"" Feb 9 09:43:24.937000 audit[2426]: NETFILTER_CFG table=mangle:59 family=2 entries=1 op=nft_register_chain pid=2426 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:24.937000 audit[2426]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd3974150 a2=0 a3=ffff921096c0 items=0 ppid=2347 pid=2426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:24.942197 kernel: audit: type=1325 audit(1707471804.937:228): table=mangle:59 family=2 entries=1 op=nft_register_chain pid=2426 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:24.942282 kernel: audit: type=1300 
audit(1707471804.937:228): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd3974150 a2=0 a3=ffff921096c0 items=0 ppid=2347 pid=2426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:24.942332 kernel: audit: type=1327 audit(1707471804.937:228): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 09:43:24.937000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 09:43:24.937000 audit[2427]: NETFILTER_CFG table=mangle:60 family=10 entries=1 op=nft_register_chain pid=2427 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:24.945371 kernel: audit: type=1325 audit(1707471804.937:229): table=mangle:60 family=10 entries=1 op=nft_register_chain pid=2427 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:24.937000 audit[2427]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc7ec5400 a2=0 a3=ffffbb98f6c0 items=0 ppid=2347 pid=2427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:24.948962 kernel: audit: type=1300 audit(1707471804.937:229): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc7ec5400 a2=0 a3=ffffbb98f6c0 items=0 ppid=2347 pid=2427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:24.949042 kernel: audit: type=1327 audit(1707471804.937:229): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 
09:43:24.937000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 09:43:24.938000 audit[2428]: NETFILTER_CFG table=nat:61 family=10 entries=1 op=nft_register_chain pid=2428 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:24.955229 kernel: audit: type=1325 audit(1707471804.938:230): table=nat:61 family=10 entries=1 op=nft_register_chain pid=2428 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:24.955282 kernel: audit: type=1300 audit(1707471804.938:230): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe2070160 a2=0 a3=ffff93dec6c0 items=0 ppid=2347 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:24.938000 audit[2428]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe2070160 a2=0 a3=ffff93dec6c0 items=0 ppid=2347 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:24.938000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 09:43:24.959812 kernel: audit: type=1327 audit(1707471804.938:230): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 09:43:24.959867 kernel: audit: type=1325 audit(1707471804.940:231): table=filter:62 family=10 entries=1 op=nft_register_chain pid=2431 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:24.940000 audit[2431]: NETFILTER_CFG table=filter:62 family=10 entries=1 op=nft_register_chain pid=2431 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:24.940000 audit[2431]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd4663820 a2=0 a3=ffffa34866c0 items=0 ppid=2347 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:24.940000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 09:43:24.940000 audit[2430]: NETFILTER_CFG table=nat:63 family=2 entries=1 op=nft_register_chain pid=2430 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:24.940000 audit[2430]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffffb7d950 a2=0 a3=ffffa7cce6c0 items=0 ppid=2347 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:24.940000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 09:43:24.941000 audit[2432]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_chain pid=2432 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:24.941000 audit[2432]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffddb50860 a2=0 a3=ffffba5a26c0 items=0 ppid=2347 pid=2432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:24.941000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 09:43:25.040000 audit[2433]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=2433 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 
09:43:25.040000 audit[2433]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd6fb6350 a2=0 a3=ffff9dfa66c0 items=0 ppid=2347 pid=2433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.040000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 09:43:25.043000 audit[2435]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=2435 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:25.043000 audit[2435]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff2207950 a2=0 a3=ffffa56926c0 items=0 ppid=2347 pid=2435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.043000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 9 09:43:25.046000 audit[2438]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=2438 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:25.046000 audit[2438]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc47628b0 a2=0 a3=ffffb922e6c0 items=0 ppid=2347 pid=2438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.046000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 9 09:43:25.048000 audit[2439]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=2439 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:25.048000 audit[2439]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc48dd2e0 a2=0 a3=ffff858fa6c0 items=0 ppid=2347 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.048000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 09:43:25.051000 audit[2441]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=2441 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:25.051000 audit[2441]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd4aad700 a2=0 a3=ffff89c3f6c0 items=0 ppid=2347 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.051000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 09:43:25.052000 audit[2442]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=2442 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:25.052000 audit[2442]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 
a0=3 a1=fffffa583310 a2=0 a3=ffffb2ba96c0 items=0 ppid=2347 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.052000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 09:43:25.055000 audit[2444]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=2444 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:25.055000 audit[2444]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff7e84190 a2=0 a3=ffff8927d6c0 items=0 ppid=2347 pid=2444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.055000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 09:43:25.060000 audit[2447]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=2447 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:25.060000 audit[2447]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffffbc10820 a2=0 a3=ffff916016c0 items=0 ppid=2347 pid=2447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.060000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 9 09:43:25.061000 audit[2448]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=2448 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:25.061000 audit[2448]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdac51e80 a2=0 a3=ffff875876c0 items=0 ppid=2347 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.061000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 09:43:25.064000 audit[2450]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=2450 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:25.064000 audit[2450]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcba66560 a2=0 a3=ffff9f3df6c0 items=0 ppid=2347 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.064000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 09:43:25.065000 audit[2451]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_chain pid=2451 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:25.065000 audit[2451]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc6f67360 
a2=0 a3=ffffb8d3f6c0 items=0 ppid=2347 pid=2451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.065000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 09:43:25.070817 kubelet[2163]: I0209 09:43:25.070782 2163 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-vpqrj" podStartSLOduration=1.070736023 pod.CreationTimestamp="2024-02-09 09:43:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:43:25.070190953 +0000 UTC m=+15.362971364" watchObservedRunningTime="2024-02-09 09:43:25.070736023 +0000 UTC m=+15.363516434" Feb 9 09:43:25.071000 audit[2453]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=2453 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:25.071000 audit[2453]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe35b0b40 a2=0 a3=ffff975a76c0 items=0 ppid=2347 pid=2453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.071000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 09:43:25.074000 audit[2456]: NETFILTER_CFG table=filter:77 family=2 entries=1 op=nft_register_rule pid=2456 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:25.074000 audit[2456]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 
a0=3 a1=ffffce18ccb0 a2=0 a3=ffff9903f6c0 items=0 ppid=2347 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.074000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 09:43:25.079000 audit[2459]: NETFILTER_CFG table=filter:78 family=2 entries=1 op=nft_register_rule pid=2459 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:25.079000 audit[2459]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe269b9d0 a2=0 a3=ffff976b86c0 items=0 ppid=2347 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.079000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 09:43:25.080000 audit[2460]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_chain pid=2460 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:25.080000 audit[2460]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffde071830 a2=0 a3=ffff990726c0 items=0 ppid=2347 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.080000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 09:43:25.082000 audit[2462]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_rule pid=2462 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:25.082000 audit[2462]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff8caa880 a2=0 a3=ffffadc5d6c0 items=0 ppid=2347 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.082000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 09:43:25.086000 audit[2465]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=2465 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:43:25.086000 audit[2465]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe4cc5e90 a2=0 a3=ffff9708a6c0 items=0 ppid=2347 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.086000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 09:43:25.109000 audit[2469]: NETFILTER_CFG table=filter:82 family=2 entries=6 op=nft_register_rule pid=2469 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:43:25.109000 audit[2469]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=fffff2d52840 a2=0 a3=ffffbd35c6c0 items=0 ppid=2347 pid=2469 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.109000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:43:25.114000 audit[2469]: NETFILTER_CFG table=nat:83 family=2 entries=17 op=nft_register_chain pid=2469 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:43:25.114000 audit[2469]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=fffff2d52840 a2=0 a3=ffffbd35c6c0 items=0 ppid=2347 pid=2469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.114000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:43:25.115000 audit[2473]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2473 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:25.115000 audit[2473]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff9faeb70 a2=0 a3=ffff927c06c0 items=0 ppid=2347 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.115000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 09:43:25.117000 audit[2475]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=2475 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:25.117000 audit[2475]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 
a1=ffffe5f032f0 a2=0 a3=ffff926af6c0 items=0 ppid=2347 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.117000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 9 09:43:25.121000 audit[2478]: NETFILTER_CFG table=filter:86 family=10 entries=2 op=nft_register_chain pid=2478 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:25.121000 audit[2478]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffee934b10 a2=0 a3=ffffa6f9b6c0 items=0 ppid=2347 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.121000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Feb 9 09:43:25.122000 audit[2479]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=2479 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:25.122000 audit[2479]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd0a6fc90 a2=0 a3=ffff93bcb6c0 items=0 ppid=2347 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.122000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 09:43:25.124000 audit[2481]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=2481 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:25.124000 audit[2481]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd97f4bd0 a2=0 a3=ffffb622f6c0 items=0 ppid=2347 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.124000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 09:43:25.126000 audit[2482]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=2482 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:25.126000 audit[2482]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff5dee2a0 a2=0 a3=ffff89bdd6c0 items=0 ppid=2347 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.126000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 09:43:25.128000 audit[2484]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=2484 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:25.128000 audit[2484]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffcb347da0 a2=0 a3=ffff92bbb6c0 items=0 ppid=2347 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.128000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Feb 9 09:43:25.131000 audit[2487]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=2487 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:25.131000 audit[2487]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffdb466200 a2=0 a3=ffff8b0a06c0 items=0 ppid=2347 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.131000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 09:43:25.132000 audit[2488]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=2488 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:25.132000 audit[2488]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe6e71990 a2=0 a3=ffffaada76c0 items=0 ppid=2347 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.132000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 09:43:25.135000 audit[2490]: NETFILTER_CFG table=filter:93 
family=10 entries=1 op=nft_register_rule pid=2490 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:25.135000 audit[2490]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc093c6b0 a2=0 a3=ffffa3c706c0 items=0 ppid=2347 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.135000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 09:43:25.136000 audit[2491]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=2491 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:25.136000 audit[2491]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffb24d500 a2=0 a3=ffffa82356c0 items=0 ppid=2347 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.136000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 09:43:25.138000 audit[2493]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=2493 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:25.138000 audit[2493]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffcb26f50 a2=0 a3=ffffbef7c6c0 items=0 ppid=2347 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.138000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 09:43:25.141000 audit[2496]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=2496 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:25.141000 audit[2496]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffc2c1220 a2=0 a3=ffffab6246c0 items=0 ppid=2347 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.141000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 09:43:25.144000 audit[2499]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=2499 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:25.144000 audit[2499]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffe0fdaa0 a2=0 a3=ffff850f36c0 items=0 ppid=2347 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.144000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Feb 9 09:43:25.146000 audit[2500]: NETFILTER_CFG table=nat:98 family=10 
entries=1 op=nft_register_chain pid=2500 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:25.146000 audit[2500]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc7ecea40 a2=0 a3=ffffad6b26c0 items=0 ppid=2347 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.146000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 09:43:25.148000 audit[2502]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=2502 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:25.148000 audit[2502]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=fffff8d36ee0 a2=0 a3=ffffad4bc6c0 items=0 ppid=2347 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.148000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 09:43:25.151000 audit[2505]: NETFILTER_CFG table=nat:100 family=10 entries=2 op=nft_register_chain pid=2505 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:43:25.151000 audit[2505]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=fffff0a237e0 a2=0 a3=ffff9f3186c0 items=0 ppid=2347 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.151000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 09:43:25.156000 audit[2509]: NETFILTER_CFG table=filter:101 family=10 entries=3 op=nft_register_rule pid=2509 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 09:43:25.156000 audit[2509]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffda47fa70 a2=0 a3=ffff83fde6c0 items=0 ppid=2347 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.156000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:43:25.156000 audit[2509]: NETFILTER_CFG table=nat:102 family=10 entries=10 op=nft_register_chain pid=2509 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 09:43:25.156000 audit[2509]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1968 a0=3 a1=ffffda47fa70 a2=0 a3=ffff83fde6c0 items=0 ppid=2347 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:25.156000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:43:25.474539 systemd[1]: run-containerd-runc-k8s.io-1b658c33ea7b2ca348265e3309a3477f429141625fdc2bbdd404c6dbd259273c-runc.yTK2DL.mount: Deactivated successfully. Feb 9 09:43:25.824971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1961466399.mount: Deactivated successfully. 
Feb 9 09:43:25.885715 kubelet[2163]: E0209 09:43:25.885248 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:26.350981 env[1222]: time="2024-02-09T09:43:26.350932298Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:26.352716 env[1222]: time="2024-02-09T09:43:26.352676308Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c7a10ec867a90652f951a6ba5a12efb94165e0a1c9b72167810d1065e57d768f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:26.354797 env[1222]: time="2024-02-09T09:43:26.354769752Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:26.356689 env[1222]: time="2024-02-09T09:43:26.356657240Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:715ac9a30f8a9579e44258af20de354715429e11836b493918e9e1a696e9b028,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:26.357429 env[1222]: time="2024-02-09T09:43:26.357397347Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\" returns image reference \"sha256:c7a10ec867a90652f951a6ba5a12efb94165e0a1c9b72167810d1065e57d768f\"" Feb 9 09:43:26.359799 env[1222]: time="2024-02-09T09:43:26.359763426Z" level=info msg="CreateContainer within sandbox \"c94eefb8f942813d718c4c731e9bd977c025a068f878bce286ee1d2cf742b560\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 9 09:43:26.369517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4292647242.mount: Deactivated successfully. 
Feb 9 09:43:26.371923 env[1222]: time="2024-02-09T09:43:26.371879419Z" level=info msg="CreateContainer within sandbox \"c94eefb8f942813d718c4c731e9bd977c025a068f878bce286ee1d2cf742b560\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1ee3ad4268f3e745094d3f7eee2e8284522466751bd212ac32fdf134a1b69c91\"" Feb 9 09:43:26.372428 env[1222]: time="2024-02-09T09:43:26.372399890Z" level=info msg="StartContainer for \"1ee3ad4268f3e745094d3f7eee2e8284522466751bd212ac32fdf134a1b69c91\"" Feb 9 09:43:26.485598 env[1222]: time="2024-02-09T09:43:26.485525430Z" level=info msg="StartContainer for \"1ee3ad4268f3e745094d3f7eee2e8284522466751bd212ac32fdf134a1b69c91\" returns successfully" Feb 9 09:43:26.895312 kubelet[2163]: I0209 09:43:26.895270 2163 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-cfc98749c-86wvj" podStartSLOduration=-9.223372033959545e+09 pod.CreationTimestamp="2024-02-09 09:43:24 +0000 UTC" firstStartedPulling="2024-02-09 09:43:24.888274683 +0000 UTC m=+15.181055094" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:43:26.894987529 +0000 UTC m=+17.187767940" watchObservedRunningTime="2024-02-09 09:43:26.895230085 +0000 UTC m=+17.188010496" Feb 9 09:43:29.680000 audit[2573]: NETFILTER_CFG table=filter:103 family=2 entries=13 op=nft_register_rule pid=2573 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:43:29.680000 audit[2573]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=fffff371eca0 a2=0 a3=ffff921b76c0 items=0 ppid=2347 pid=2573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:29.680000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:43:29.681000 
audit[2573]: NETFILTER_CFG table=nat:104 family=2 entries=20 op=nft_register_rule pid=2573 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:43:29.681000 audit[2573]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=fffff371eca0 a2=0 a3=ffff921b76c0 items=0 ppid=2347 pid=2573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:29.681000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:43:29.723000 audit[2599]: NETFILTER_CFG table=filter:105 family=2 entries=14 op=nft_register_rule pid=2599 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:43:29.723000 audit[2599]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffd8fd7150 a2=0 a3=ffffbac6e6c0 items=0 ppid=2347 pid=2599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:29.723000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:43:29.723000 audit[2599]: NETFILTER_CFG table=nat:106 family=2 entries=20 op=nft_register_rule pid=2599 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:43:29.723000 audit[2599]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffd8fd7150 a2=0 a3=ffffbac6e6c0 items=0 ppid=2347 pid=2599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:29.723000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:43:29.774306 kubelet[2163]: I0209 09:43:29.774260 2163 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:43:29.814209 kubelet[2163]: I0209 09:43:29.814171 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5aea9432-d2e6-4f38-8a18-bb5f9bd341b1-typha-certs\") pod \"calico-typha-54fbfb5ccc-khn4z\" (UID: \"5aea9432-d2e6-4f38-8a18-bb5f9bd341b1\") " pod="calico-system/calico-typha-54fbfb5ccc-khn4z" Feb 9 09:43:29.814442 kubelet[2163]: I0209 09:43:29.814223 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5aea9432-d2e6-4f38-8a18-bb5f9bd341b1-tigera-ca-bundle\") pod \"calico-typha-54fbfb5ccc-khn4z\" (UID: \"5aea9432-d2e6-4f38-8a18-bb5f9bd341b1\") " pod="calico-system/calico-typha-54fbfb5ccc-khn4z" Feb 9 09:43:29.814442 kubelet[2163]: I0209 09:43:29.814248 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nfsb\" (UniqueName: \"kubernetes.io/projected/5aea9432-d2e6-4f38-8a18-bb5f9bd341b1-kube-api-access-8nfsb\") pod \"calico-typha-54fbfb5ccc-khn4z\" (UID: \"5aea9432-d2e6-4f38-8a18-bb5f9bd341b1\") " pod="calico-system/calico-typha-54fbfb5ccc-khn4z" Feb 9 09:43:29.838518 kubelet[2163]: I0209 09:43:29.838457 2163 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:43:29.915311 kubelet[2163]: I0209 09:43:29.915242 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1c8b5181-39f5-4bf3-9442-dae93af0e170-node-certs\") pod \"calico-node-wb9kl\" (UID: \"1c8b5181-39f5-4bf3-9442-dae93af0e170\") " pod="calico-system/calico-node-wb9kl" Feb 9 09:43:29.915472 kubelet[2163]: I0209 
09:43:29.915329 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1c8b5181-39f5-4bf3-9442-dae93af0e170-cni-bin-dir\") pod \"calico-node-wb9kl\" (UID: \"1c8b5181-39f5-4bf3-9442-dae93af0e170\") " pod="calico-system/calico-node-wb9kl"
Feb 9 09:43:29.915472 kubelet[2163]: I0209 09:43:29.915360 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1c8b5181-39f5-4bf3-9442-dae93af0e170-cni-net-dir\") pod \"calico-node-wb9kl\" (UID: \"1c8b5181-39f5-4bf3-9442-dae93af0e170\") " pod="calico-system/calico-node-wb9kl"
Feb 9 09:43:29.915472 kubelet[2163]: I0209 09:43:29.915383 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pzsw\" (UniqueName: \"kubernetes.io/projected/1c8b5181-39f5-4bf3-9442-dae93af0e170-kube-api-access-2pzsw\") pod \"calico-node-wb9kl\" (UID: \"1c8b5181-39f5-4bf3-9442-dae93af0e170\") " pod="calico-system/calico-node-wb9kl"
Feb 9 09:43:29.915472 kubelet[2163]: I0209 09:43:29.915407 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1c8b5181-39f5-4bf3-9442-dae93af0e170-var-run-calico\") pod \"calico-node-wb9kl\" (UID: \"1c8b5181-39f5-4bf3-9442-dae93af0e170\") " pod="calico-system/calico-node-wb9kl"
Feb 9 09:43:29.915472 kubelet[2163]: I0209 09:43:29.915433 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1c8b5181-39f5-4bf3-9442-dae93af0e170-policysync\") pod \"calico-node-wb9kl\" (UID: \"1c8b5181-39f5-4bf3-9442-dae93af0e170\") " pod="calico-system/calico-node-wb9kl"
Feb 9 09:43:29.915603 kubelet[2163]: I0209 09:43:29.915458 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1c8b5181-39f5-4bf3-9442-dae93af0e170-cni-log-dir\") pod \"calico-node-wb9kl\" (UID: \"1c8b5181-39f5-4bf3-9442-dae93af0e170\") " pod="calico-system/calico-node-wb9kl"
Feb 9 09:43:29.915603 kubelet[2163]: I0209 09:43:29.915480 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1c8b5181-39f5-4bf3-9442-dae93af0e170-flexvol-driver-host\") pod \"calico-node-wb9kl\" (UID: \"1c8b5181-39f5-4bf3-9442-dae93af0e170\") " pod="calico-system/calico-node-wb9kl"
Feb 9 09:43:29.915603 kubelet[2163]: I0209 09:43:29.915502 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c8b5181-39f5-4bf3-9442-dae93af0e170-xtables-lock\") pod \"calico-node-wb9kl\" (UID: \"1c8b5181-39f5-4bf3-9442-dae93af0e170\") " pod="calico-system/calico-node-wb9kl"
Feb 9 09:43:29.915603 kubelet[2163]: I0209 09:43:29.915526 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1c8b5181-39f5-4bf3-9442-dae93af0e170-var-lib-calico\") pod \"calico-node-wb9kl\" (UID: \"1c8b5181-39f5-4bf3-9442-dae93af0e170\") " pod="calico-system/calico-node-wb9kl"
Feb 9 09:43:29.915603 kubelet[2163]: I0209 09:43:29.915558 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c8b5181-39f5-4bf3-9442-dae93af0e170-lib-modules\") pod \"calico-node-wb9kl\" (UID: \"1c8b5181-39f5-4bf3-9442-dae93af0e170\") " pod="calico-system/calico-node-wb9kl"
Feb 9 09:43:29.915723 kubelet[2163]: I0209 09:43:29.915578 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c8b5181-39f5-4bf3-9442-dae93af0e170-tigera-ca-bundle\") pod \"calico-node-wb9kl\" (UID: \"1c8b5181-39f5-4bf3-9442-dae93af0e170\") " pod="calico-system/calico-node-wb9kl"
Feb 9 09:43:29.947421 kubelet[2163]: I0209 09:43:29.947318 2163 topology_manager.go:210] "Topology Admit Handler"
Feb 9 09:43:29.947848 kubelet[2163]: E0209 09:43:29.947822 2163 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dvjfd" podUID=440d5d1f-6bf0-4b9d-b4ee-c3a791745bc9
Feb 9 09:43:30.016537 kubelet[2163]: I0209 09:43:30.016497 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4l4x\" (UniqueName: \"kubernetes.io/projected/440d5d1f-6bf0-4b9d-b4ee-c3a791745bc9-kube-api-access-v4l4x\") pod \"csi-node-driver-dvjfd\" (UID: \"440d5d1f-6bf0-4b9d-b4ee-c3a791745bc9\") " pod="calico-system/csi-node-driver-dvjfd"
Feb 9 09:43:30.016674 kubelet[2163]: I0209 09:43:30.016567 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/440d5d1f-6bf0-4b9d-b4ee-c3a791745bc9-varrun\") pod \"csi-node-driver-dvjfd\" (UID: \"440d5d1f-6bf0-4b9d-b4ee-c3a791745bc9\") " pod="calico-system/csi-node-driver-dvjfd"
Feb 9 09:43:30.016674 kubelet[2163]: I0209 09:43:30.016660 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/440d5d1f-6bf0-4b9d-b4ee-c3a791745bc9-socket-dir\") pod \"csi-node-driver-dvjfd\" (UID: \"440d5d1f-6bf0-4b9d-b4ee-c3a791745bc9\") " pod="calico-system/csi-node-driver-dvjfd"
Feb 9 09:43:30.016753 kubelet[2163]: I0209 09:43:30.016680 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/440d5d1f-6bf0-4b9d-b4ee-c3a791745bc9-kubelet-dir\") pod \"csi-node-driver-dvjfd\" (UID: \"440d5d1f-6bf0-4b9d-b4ee-c3a791745bc9\") " pod="calico-system/csi-node-driver-dvjfd"
Feb 9 09:43:30.016753 kubelet[2163]: I0209 09:43:30.016701 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/440d5d1f-6bf0-4b9d-b4ee-c3a791745bc9-registration-dir\") pod \"csi-node-driver-dvjfd\" (UID: \"440d5d1f-6bf0-4b9d-b4ee-c3a791745bc9\") " pod="calico-system/csi-node-driver-dvjfd"
Feb 9 09:43:30.017482 kubelet[2163]: E0209 09:43:30.017463 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.017586 kubelet[2163]: W0209 09:43:30.017571 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.017656 kubelet[2163]: E0209 09:43:30.017645 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.017933 kubelet[2163]: E0209 09:43:30.017919 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.018015 kubelet[2163]: W0209 09:43:30.018003 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.018079 kubelet[2163]: E0209 09:43:30.018069 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.018315 kubelet[2163]: E0209 09:43:30.018303 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.018402 kubelet[2163]: W0209 09:43:30.018388 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.018460 kubelet[2163]: E0209 09:43:30.018450 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.018748 kubelet[2163]: E0209 09:43:30.018735 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.018833 kubelet[2163]: W0209 09:43:30.018821 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.018952 kubelet[2163]: E0209 09:43:30.018927 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.019136 kubelet[2163]: E0209 09:43:30.019124 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.019225 kubelet[2163]: W0209 09:43:30.019211 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.019336 kubelet[2163]: E0209 09:43:30.019319 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.019539 kubelet[2163]: E0209 09:43:30.019527 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.019624 kubelet[2163]: W0209 09:43:30.019610 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.019717 kubelet[2163]: E0209 09:43:30.019697 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.019903 kubelet[2163]: E0209 09:43:30.019891 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.019975 kubelet[2163]: W0209 09:43:30.019962 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.020082 kubelet[2163]: E0209 09:43:30.020063 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.020276 kubelet[2163]: E0209 09:43:30.020263 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.020380 kubelet[2163]: W0209 09:43:30.020367 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.020494 kubelet[2163]: E0209 09:43:30.020474 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.020677 kubelet[2163]: E0209 09:43:30.020662 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.020743 kubelet[2163]: W0209 09:43:30.020731 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.020848 kubelet[2163]: E0209 09:43:30.020825 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.021023 kubelet[2163]: E0209 09:43:30.021012 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.021093 kubelet[2163]: W0209 09:43:30.021082 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.021216 kubelet[2163]: E0209 09:43:30.021196 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.021487 kubelet[2163]: E0209 09:43:30.021466 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.021564 kubelet[2163]: W0209 09:43:30.021551 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.021699 kubelet[2163]: E0209 09:43:30.021680 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.022351 kubelet[2163]: E0209 09:43:30.022296 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.022495 kubelet[2163]: W0209 09:43:30.022481 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.022662 kubelet[2163]: E0209 09:43:30.022645 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.023025 kubelet[2163]: E0209 09:43:30.023010 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.023123 kubelet[2163]: W0209 09:43:30.023109 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.023193 kubelet[2163]: E0209 09:43:30.023183 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.023574 kubelet[2163]: E0209 09:43:30.023560 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.023659 kubelet[2163]: W0209 09:43:30.023646 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.023726 kubelet[2163]: E0209 09:43:30.023717 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.024059 kubelet[2163]: E0209 09:43:30.024045 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.024136 kubelet[2163]: W0209 09:43:30.024123 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.024227 kubelet[2163]: E0209 09:43:30.024212 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.024500 kubelet[2163]: E0209 09:43:30.024486 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.024598 kubelet[2163]: W0209 09:43:30.024576 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.024719 kubelet[2163]: E0209 09:43:30.024696 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.024923 kubelet[2163]: E0209 09:43:30.024911 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.024987 kubelet[2163]: W0209 09:43:30.024976 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.025053 kubelet[2163]: E0209 09:43:30.025043 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.025338 kubelet[2163]: E0209 09:43:30.025324 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.025427 kubelet[2163]: W0209 09:43:30.025414 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.025491 kubelet[2163]: E0209 09:43:30.025481 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.117161 kubelet[2163]: E0209 09:43:30.117131 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.117161 kubelet[2163]: W0209 09:43:30.117157 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.117360 kubelet[2163]: E0209 09:43:30.117179 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.117405 kubelet[2163]: E0209 09:43:30.117389 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.117405 kubelet[2163]: W0209 09:43:30.117401 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.117475 kubelet[2163]: E0209 09:43:30.117413 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.117600 kubelet[2163]: E0209 09:43:30.117589 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.117642 kubelet[2163]: W0209 09:43:30.117600 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.117642 kubelet[2163]: E0209 09:43:30.117615 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.117783 kubelet[2163]: E0209 09:43:30.117770 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.117783 kubelet[2163]: W0209 09:43:30.117781 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.117865 kubelet[2163]: E0209 09:43:30.117792 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.117967 kubelet[2163]: E0209 09:43:30.117940 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.117967 kubelet[2163]: W0209 09:43:30.117947 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.117967 kubelet[2163]: E0209 09:43:30.117958 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.118196 kubelet[2163]: E0209 09:43:30.118182 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.118196 kubelet[2163]: W0209 09:43:30.118195 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.118325 kubelet[2163]: E0209 09:43:30.118212 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.118424 kubelet[2163]: E0209 09:43:30.118406 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.118424 kubelet[2163]: W0209 09:43:30.118418 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.118791 kubelet[2163]: E0209 09:43:30.118430 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.118791 kubelet[2163]: E0209 09:43:30.118607 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.118791 kubelet[2163]: W0209 09:43:30.118614 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.118791 kubelet[2163]: E0209 09:43:30.118624 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.119448 kubelet[2163]: E0209 09:43:30.118832 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.119448 kubelet[2163]: W0209 09:43:30.118842 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.119448 kubelet[2163]: E0209 09:43:30.118857 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.119448 kubelet[2163]: E0209 09:43:30.119279 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.119448 kubelet[2163]: W0209 09:43:30.119289 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.119448 kubelet[2163]: E0209 09:43:30.119324 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.119979 kubelet[2163]: E0209 09:43:30.119966 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.119979 kubelet[2163]: W0209 09:43:30.119979 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.120059 kubelet[2163]: E0209 09:43:30.119997 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.120172 kubelet[2163]: E0209 09:43:30.120161 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.120172 kubelet[2163]: W0209 09:43:30.120171 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.120249 kubelet[2163]: E0209 09:43:30.120186 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.120345 kubelet[2163]: E0209 09:43:30.120336 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.120345 kubelet[2163]: W0209 09:43:30.120345 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.120471 kubelet[2163]: E0209 09:43:30.120452 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.120539 kubelet[2163]: E0209 09:43:30.120467 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.120599 kubelet[2163]: W0209 09:43:30.120586 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.120752 kubelet[2163]: E0209 09:43:30.120741 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.120871 kubelet[2163]: E0209 09:43:30.120861 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.120937 kubelet[2163]: W0209 09:43:30.120926 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.121060 kubelet[2163]: E0209 09:43:30.121051 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.121254 kubelet[2163]: E0209 09:43:30.121243 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.121361 kubelet[2163]: W0209 09:43:30.121348 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.121509 kubelet[2163]: E0209 09:43:30.121497 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.121625 kubelet[2163]: E0209 09:43:30.121614 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.121684 kubelet[2163]: W0209 09:43:30.121674 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.121759 kubelet[2163]: E0209 09:43:30.121749 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.122036 kubelet[2163]: E0209 09:43:30.122023 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.122115 kubelet[2163]: W0209 09:43:30.122103 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.122267 kubelet[2163]: E0209 09:43:30.122255 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.122483 kubelet[2163]: E0209 09:43:30.122472 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.122555 kubelet[2163]: W0209 09:43:30.122543 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.122700 kubelet[2163]: E0209 09:43:30.122689 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.122843 kubelet[2163]: E0209 09:43:30.122832 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.122910 kubelet[2163]: W0209 09:43:30.122898 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.123030 kubelet[2163]: E0209 09:43:30.123005 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.123245 kubelet[2163]: E0209 09:43:30.123231 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.123356 kubelet[2163]: W0209 09:43:30.123342 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.123511 kubelet[2163]: E0209 09:43:30.123500 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.123656 kubelet[2163]: E0209 09:43:30.123643 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.123724 kubelet[2163]: W0209 09:43:30.123713 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.123790 kubelet[2163]: E0209 09:43:30.123781 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.124063 kubelet[2163]: E0209 09:43:30.124049 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.124174 kubelet[2163]: W0209 09:43:30.124145 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.124261 kubelet[2163]: E0209 09:43:30.124247 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.124661 kubelet[2163]: E0209 09:43:30.124643 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.124805 kubelet[2163]: W0209 09:43:30.124789 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.124864 kubelet[2163]: E0209 09:43:30.124855 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.125117 kubelet[2163]: E0209 09:43:30.125103 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.125220 kubelet[2163]: W0209 09:43:30.125198 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.125317 kubelet[2163]: E0209 09:43:30.125293 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.125641 kubelet[2163]: E0209 09:43:30.125628 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.125716 kubelet[2163]: W0209 09:43:30.125703 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.125848 kubelet[2163]: E0209 09:43:30.125825 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.126054 kubelet[2163]: E0209 09:43:30.126040 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.126131 kubelet[2163]: W0209 09:43:30.126116 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.126219 kubelet[2163]: E0209 09:43:30.126207 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.187169 kubelet[2163]: E0209 09:43:30.187138 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.187338 kubelet[2163]: W0209 09:43:30.187321 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.187409 kubelet[2163]: E0209 09:43:30.187399 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.219573 kubelet[2163]: E0209 09:43:30.219473 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.219573 kubelet[2163]: W0209 09:43:30.219494 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.219573 kubelet[2163]: E0209 09:43:30.219514 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.219760 kubelet[2163]: E0209 09:43:30.219672 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.219760 kubelet[2163]: W0209 09:43:30.219682 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.219760 kubelet[2163]: E0209 09:43:30.219692 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 09:43:30.320454 kubelet[2163]: E0209 09:43:30.320429 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 09:43:30.320454 kubelet[2163]: W0209 09:43:30.320449 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 09:43:30.320605 kubelet[2163]: E0209 09:43:30.320467 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:30.320697 kubelet[2163]: E0209 09:43:30.320687 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:30.320697 kubelet[2163]: W0209 09:43:30.320697 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:30.320753 kubelet[2163]: E0209 09:43:30.320707 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:30.377123 kubelet[2163]: E0209 09:43:30.377085 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:30.379801 env[1222]: time="2024-02-09T09:43:30.379681379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54fbfb5ccc-khn4z,Uid:5aea9432-d2e6-4f38-8a18-bb5f9bd341b1,Namespace:calico-system,Attempt:0,}" Feb 9 09:43:30.399358 env[1222]: time="2024-02-09T09:43:30.398856185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:43:30.399358 env[1222]: time="2024-02-09T09:43:30.398906344Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:43:30.399358 env[1222]: time="2024-02-09T09:43:30.398916624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:43:30.399358 env[1222]: time="2024-02-09T09:43:30.399060542Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9475b8276307570592a49d0518b93f263797d13c75d6ffeb457f5ffe468b62f pid=2660 runtime=io.containerd.runc.v2 Feb 9 09:43:30.421917 kubelet[2163]: E0209 09:43:30.421774 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:30.421917 kubelet[2163]: W0209 09:43:30.421795 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:30.421917 kubelet[2163]: E0209 09:43:30.421818 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:30.422401 kubelet[2163]: E0209 09:43:30.422339 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:30.422401 kubelet[2163]: W0209 09:43:30.422354 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:30.422401 kubelet[2163]: E0209 09:43:30.422369 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:30.474715 env[1222]: time="2024-02-09T09:43:30.474593021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54fbfb5ccc-khn4z,Uid:5aea9432-d2e6-4f38-8a18-bb5f9bd341b1,Namespace:calico-system,Attempt:0,} returns sandbox id \"d9475b8276307570592a49d0518b93f263797d13c75d6ffeb457f5ffe468b62f\"" Feb 9 09:43:30.475637 kubelet[2163]: E0209 09:43:30.475489 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:30.477251 env[1222]: time="2024-02-09T09:43:30.477216583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\"" Feb 9 09:43:30.523627 kubelet[2163]: E0209 09:43:30.523596 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:30.523627 kubelet[2163]: W0209 09:43:30.523620 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:30.523627 kubelet[2163]: E0209 09:43:30.523638 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:30.523962 kubelet[2163]: E0209 09:43:30.523950 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:30.523962 kubelet[2163]: W0209 09:43:30.523961 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:30.524028 kubelet[2163]: E0209 09:43:30.523973 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:30.586820 kubelet[2163]: E0209 09:43:30.586795 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:30.586820 kubelet[2163]: W0209 09:43:30.586813 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:30.587011 kubelet[2163]: E0209 09:43:30.586832 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:30.624971 kubelet[2163]: E0209 09:43:30.624940 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:30.624971 kubelet[2163]: W0209 09:43:30.624967 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:30.625138 kubelet[2163]: E0209 09:43:30.624988 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:30.726482 kubelet[2163]: E0209 09:43:30.726395 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:30.726633 kubelet[2163]: W0209 09:43:30.726607 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:30.726710 kubelet[2163]: E0209 09:43:30.726699 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:30.741525 kubelet[2163]: E0209 09:43:30.741498 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:30.742200 env[1222]: time="2024-02-09T09:43:30.742161952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wb9kl,Uid:1c8b5181-39f5-4bf3-9442-dae93af0e170,Namespace:calico-system,Attempt:0,}" Feb 9 09:43:30.755171 env[1222]: time="2024-02-09T09:43:30.755104407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:43:30.755351 env[1222]: time="2024-02-09T09:43:30.755153927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:43:30.755351 env[1222]: time="2024-02-09T09:43:30.755165566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:43:30.755419 env[1222]: time="2024-02-09T09:43:30.755372083Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/06deff8d02ee7e3b471dd928c1a3b9d1393da647bdaa42c60f748a162bdccbd9 pid=2723 runtime=io.containerd.runc.v2 Feb 9 09:43:30.770000 audit[2753]: NETFILTER_CFG table=filter:107 family=2 entries=14 op=nft_register_rule pid=2753 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:43:30.771745 kernel: kauditd_printk_skb: 134 callbacks suppressed Feb 9 09:43:30.771810 kernel: audit: type=1325 audit(1707471810.770:276): table=filter:107 family=2 entries=14 op=nft_register_rule pid=2753 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:43:30.770000 audit[2753]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffcb3279d0 a2=0 a3=ffffa642e6c0 items=0 ppid=2347 pid=2753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:30.776096 kernel: audit: type=1300 audit(1707471810.770:276): arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffcb3279d0 a2=0 a3=ffffa642e6c0 items=0 ppid=2347 pid=2753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:30.776180 kernel: audit: type=1327 
audit(1707471810.770:276): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:43:30.770000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:43:30.771000 audit[2753]: NETFILTER_CFG table=nat:108 family=2 entries=20 op=nft_register_rule pid=2753 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:43:30.771000 audit[2753]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffcb3279d0 a2=0 a3=ffffa642e6c0 items=0 ppid=2347 pid=2753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:30.790169 kernel: audit: type=1325 audit(1707471810.771:277): table=nat:108 family=2 entries=20 op=nft_register_rule pid=2753 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:43:30.790245 kernel: audit: type=1300 audit(1707471810.771:277): arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffcb3279d0 a2=0 a3=ffffa642e6c0 items=0 ppid=2347 pid=2753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:30.771000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:43:30.792026 kubelet[2163]: E0209 09:43:30.792008 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:30.792026 kubelet[2163]: W0209 09:43:30.792026 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in 
$PATH, output: "" Feb 9 09:43:30.792405 kernel: audit: type=1327 audit(1707471810.771:277): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:43:30.792439 kubelet[2163]: E0209 09:43:30.792059 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:30.804113 env[1222]: time="2024-02-09T09:43:30.804059547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wb9kl,Uid:1c8b5181-39f5-4bf3-9442-dae93af0e170,Namespace:calico-system,Attempt:0,} returns sandbox id \"06deff8d02ee7e3b471dd928c1a3b9d1393da647bdaa42c60f748a162bdccbd9\"" Feb 9 09:43:30.805259 kubelet[2163]: E0209 09:43:30.804818 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:31.848088 kubelet[2163]: E0209 09:43:31.848056 2163 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dvjfd" podUID=440d5d1f-6bf0-4b9d-b4ee-c3a791745bc9 Feb 9 09:43:31.863074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount597895153.mount: Deactivated successfully. 
Feb 9 09:43:32.623317 env[1222]: time="2024-02-09T09:43:32.623251726Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:32.624366 env[1222]: time="2024-02-09T09:43:32.624340711Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fba96c9caf161e105c76b559b06b4b2337b89b54833d69984209161d93145969,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:32.625681 env[1222]: time="2024-02-09T09:43:32.625643294Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:32.627012 env[1222]: time="2024-02-09T09:43:32.626983477Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:5f2d3b8c354a4eb6de46e786889913916e620c6c256982fb8d0f1a1d36a282bc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:32.627629 env[1222]: time="2024-02-09T09:43:32.627602709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\" returns image reference \"sha256:fba96c9caf161e105c76b559b06b4b2337b89b54833d69984209161d93145969\"" Feb 9 09:43:32.630325 env[1222]: time="2024-02-09T09:43:32.628602095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\"" Feb 9 09:43:32.636968 env[1222]: time="2024-02-09T09:43:32.636914786Z" level=info msg="CreateContainer within sandbox \"d9475b8276307570592a49d0518b93f263797d13c75d6ffeb457f5ffe468b62f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 9 09:43:32.645433 env[1222]: time="2024-02-09T09:43:32.645384075Z" level=info msg="CreateContainer within sandbox \"d9475b8276307570592a49d0518b93f263797d13c75d6ffeb457f5ffe468b62f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id 
\"5316f5926b828235ace7f7529ba2741662d3df6674dc4d4d2629998bd1978b2e\"" Feb 9 09:43:32.645816 env[1222]: time="2024-02-09T09:43:32.645790110Z" level=info msg="StartContainer for \"5316f5926b828235ace7f7529ba2741662d3df6674dc4d4d2629998bd1978b2e\"" Feb 9 09:43:32.735021 env[1222]: time="2024-02-09T09:43:32.734971418Z" level=info msg="StartContainer for \"5316f5926b828235ace7f7529ba2741662d3df6674dc4d4d2629998bd1978b2e\" returns successfully" Feb 9 09:43:32.904664 kubelet[2163]: E0209 09:43:32.904561 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:32.913989 kubelet[2163]: I0209 09:43:32.913944 2163 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-54fbfb5ccc-khn4z" podStartSLOduration=-9.223372032940866e+09 pod.CreationTimestamp="2024-02-09 09:43:29 +0000 UTC" firstStartedPulling="2024-02-09 09:43:30.476710591 +0000 UTC m=+20.769491002" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:43:32.913144236 +0000 UTC m=+23.205924687" watchObservedRunningTime="2024-02-09 09:43:32.913908986 +0000 UTC m=+23.206689397" Feb 9 09:43:32.936188 kubelet[2163]: E0209 09:43:32.936151 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:32.936188 kubelet[2163]: W0209 09:43:32.936175 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:32.936188 kubelet[2163]: E0209 09:43:32.936194 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:32.936402 kubelet[2163]: E0209 09:43:32.936334 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:32.936402 kubelet[2163]: W0209 09:43:32.936340 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:32.936402 kubelet[2163]: E0209 09:43:32.936351 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:32.936553 kubelet[2163]: E0209 09:43:32.936531 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:32.936553 kubelet[2163]: W0209 09:43:32.936541 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:32.936553 kubelet[2163]: E0209 09:43:32.936553 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:32.936723 kubelet[2163]: E0209 09:43:32.936701 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:32.936723 kubelet[2163]: W0209 09:43:32.936715 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:32.936723 kubelet[2163]: E0209 09:43:32.936726 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:32.936865 kubelet[2163]: E0209 09:43:32.936847 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:32.936865 kubelet[2163]: W0209 09:43:32.936857 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:32.936865 kubelet[2163]: E0209 09:43:32.936865 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:32.937014 kubelet[2163]: E0209 09:43:32.936995 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:32.937014 kubelet[2163]: W0209 09:43:32.937010 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:32.937073 kubelet[2163]: E0209 09:43:32.937019 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:32.937246 kubelet[2163]: E0209 09:43:32.937224 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:32.937246 kubelet[2163]: W0209 09:43:32.937241 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:32.937312 kubelet[2163]: E0209 09:43:32.937253 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:32.937420 kubelet[2163]: E0209 09:43:32.937406 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:32.937420 kubelet[2163]: W0209 09:43:32.937418 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:32.937488 kubelet[2163]: E0209 09:43:32.937430 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:32.937583 kubelet[2163]: E0209 09:43:32.937568 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:32.937583 kubelet[2163]: W0209 09:43:32.937581 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:32.937636 kubelet[2163]: E0209 09:43:32.937592 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:32.937794 kubelet[2163]: E0209 09:43:32.937782 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:32.937823 kubelet[2163]: W0209 09:43:32.937794 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:32.937890 kubelet[2163]: E0209 09:43:32.937858 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:32.938092 kubelet[2163]: E0209 09:43:32.938053 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:32.938092 kubelet[2163]: W0209 09:43:32.938087 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:32.938182 kubelet[2163]: E0209 09:43:32.938109 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:32.938327 kubelet[2163]: E0209 09:43:32.938312 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:32.938327 kubelet[2163]: W0209 09:43:32.938324 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:32.938412 kubelet[2163]: E0209 09:43:32.938336 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:32.943525 kubelet[2163]: E0209 09:43:32.943391 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:32.943525 kubelet[2163]: W0209 09:43:32.943407 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:32.943525 kubelet[2163]: E0209 09:43:32.943423 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:32.943822 kubelet[2163]: E0209 09:43:32.943700 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:32.943822 kubelet[2163]: W0209 09:43:32.943711 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:32.943822 kubelet[2163]: E0209 09:43:32.943728 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:32.944109 kubelet[2163]: E0209 09:43:32.943972 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:32.944109 kubelet[2163]: W0209 09:43:32.943986 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:32.944109 kubelet[2163]: E0209 09:43:32.944003 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:32.944373 kubelet[2163]: E0209 09:43:32.944254 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:32.944373 kubelet[2163]: W0209 09:43:32.944266 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:32.944373 kubelet[2163]: E0209 09:43:32.944279 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:32.944628 kubelet[2163]: E0209 09:43:32.944529 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:32.944628 kubelet[2163]: W0209 09:43:32.944540 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:32.944628 kubelet[2163]: E0209 09:43:32.944558 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:32.944918 kubelet[2163]: E0209 09:43:32.944764 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:32.944918 kubelet[2163]: W0209 09:43:32.944775 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:32.944918 kubelet[2163]: E0209 09:43:32.944812 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:32.945187 kubelet[2163]: E0209 09:43:32.945088 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:32.945187 kubelet[2163]: W0209 09:43:32.945108 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:32.945187 kubelet[2163]: E0209 09:43:32.945164 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:32.945433 kubelet[2163]: E0209 09:43:32.945352 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:32.945433 kubelet[2163]: W0209 09:43:32.945363 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:32.945433 kubelet[2163]: E0209 09:43:32.945406 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:32.945815 kubelet[2163]: E0209 09:43:32.945584 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:32.945815 kubelet[2163]: W0209 09:43:32.945596 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:32.945815 kubelet[2163]: E0209 09:43:32.945609 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:32.946098 kubelet[2163]: E0209 09:43:32.945982 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:32.946098 kubelet[2163]: W0209 09:43:32.945993 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:32.946098 kubelet[2163]: E0209 09:43:32.946005 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:32.946388 kubelet[2163]: E0209 09:43:32.946270 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:32.946388 kubelet[2163]: W0209 09:43:32.946283 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:32.946388 kubelet[2163]: E0209 09:43:32.946295 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:32.946675 kubelet[2163]: E0209 09:43:32.946556 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:32.946675 kubelet[2163]: W0209 09:43:32.946567 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:32.946675 kubelet[2163]: E0209 09:43:32.946580 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:32.946922 kubelet[2163]: E0209 09:43:32.946840 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:32.946922 kubelet[2163]: W0209 09:43:32.946852 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:32.946922 kubelet[2163]: E0209 09:43:32.946895 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:32.947160 kubelet[2163]: E0209 09:43:32.947065 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:32.947160 kubelet[2163]: W0209 09:43:32.947075 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:32.947160 kubelet[2163]: E0209 09:43:32.947086 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:32.947459 kubelet[2163]: E0209 09:43:32.947341 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:32.947459 kubelet[2163]: W0209 09:43:32.947352 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:32.947459 kubelet[2163]: E0209 09:43:32.947364 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:32.947860 kubelet[2163]: E0209 09:43:32.947624 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:32.947860 kubelet[2163]: W0209 09:43:32.947635 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:32.947860 kubelet[2163]: E0209 09:43:32.947648 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:32.948714 kubelet[2163]: E0209 09:43:32.948035 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:32.948714 kubelet[2163]: W0209 09:43:32.948048 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:32.948714 kubelet[2163]: E0209 09:43:32.948061 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:32.948980 kubelet[2163]: E0209 09:43:32.948923 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:32.948980 kubelet[2163]: W0209 09:43:32.948937 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:32.948980 kubelet[2163]: E0209 09:43:32.948952 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:33.849264 kubelet[2163]: E0209 09:43:33.848408 2163 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dvjfd" podUID=440d5d1f-6bf0-4b9d-b4ee-c3a791745bc9 Feb 9 09:43:33.906072 kubelet[2163]: I0209 09:43:33.905395 2163 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 9 09:43:33.906072 kubelet[2163]: E0209 09:43:33.906007 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:33.916292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount367937410.mount: Deactivated successfully. 
Feb 9 09:43:33.945079 kubelet[2163]: E0209 09:43:33.945051 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:33.945079 kubelet[2163]: W0209 09:43:33.945074 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:33.945260 kubelet[2163]: E0209 09:43:33.945101 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:33.945348 kubelet[2163]: E0209 09:43:33.945336 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:33.945348 kubelet[2163]: W0209 09:43:33.945348 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:33.945424 kubelet[2163]: E0209 09:43:33.945361 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:33.945538 kubelet[2163]: E0209 09:43:33.945527 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:33.945574 kubelet[2163]: W0209 09:43:33.945539 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:33.945574 kubelet[2163]: E0209 09:43:33.945550 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:33.945757 kubelet[2163]: E0209 09:43:33.945739 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:33.945757 kubelet[2163]: W0209 09:43:33.945749 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:33.945757 kubelet[2163]: E0209 09:43:33.945760 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:33.945938 kubelet[2163]: E0209 09:43:33.945927 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:33.945970 kubelet[2163]: W0209 09:43:33.945938 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:33.945970 kubelet[2163]: E0209 09:43:33.945949 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:33.947383 kubelet[2163]: E0209 09:43:33.947369 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:33.947383 kubelet[2163]: W0209 09:43:33.947382 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:33.947466 kubelet[2163]: E0209 09:43:33.947395 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:33.947618 kubelet[2163]: E0209 09:43:33.947608 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:33.947618 kubelet[2163]: W0209 09:43:33.947618 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:33.947684 kubelet[2163]: E0209 09:43:33.947630 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:33.947785 kubelet[2163]: E0209 09:43:33.947775 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:33.947821 kubelet[2163]: W0209 09:43:33.947785 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:33.947821 kubelet[2163]: E0209 09:43:33.947805 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:33.947970 kubelet[2163]: E0209 09:43:33.947958 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:33.948002 kubelet[2163]: W0209 09:43:33.947970 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:33.948002 kubelet[2163]: E0209 09:43:33.947983 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:33.949049 kubelet[2163]: E0209 09:43:33.949035 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:33.949049 kubelet[2163]: W0209 09:43:33.949048 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:33.949134 kubelet[2163]: E0209 09:43:33.949060 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:33.949251 kubelet[2163]: E0209 09:43:33.949240 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:33.949251 kubelet[2163]: W0209 09:43:33.949250 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:33.949330 kubelet[2163]: E0209 09:43:33.949261 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:33.949453 kubelet[2163]: E0209 09:43:33.949441 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:33.949487 kubelet[2163]: W0209 09:43:33.949453 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:33.949487 kubelet[2163]: E0209 09:43:33.949465 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:33.950746 kubelet[2163]: E0209 09:43:33.950716 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:33.950746 kubelet[2163]: W0209 09:43:33.950730 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:33.950746 kubelet[2163]: E0209 09:43:33.950743 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:33.950971 kubelet[2163]: E0209 09:43:33.950960 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:33.950971 kubelet[2163]: W0209 09:43:33.950971 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:33.951026 kubelet[2163]: E0209 09:43:33.950986 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:33.951187 kubelet[2163]: E0209 09:43:33.951176 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:33.951187 kubelet[2163]: W0209 09:43:33.951187 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:33.951252 kubelet[2163]: E0209 09:43:33.951202 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:33.951458 kubelet[2163]: E0209 09:43:33.951433 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:33.951458 kubelet[2163]: W0209 09:43:33.951456 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:33.951527 kubelet[2163]: E0209 09:43:33.951471 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:33.951615 kubelet[2163]: E0209 09:43:33.951603 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:33.951615 kubelet[2163]: W0209 09:43:33.951613 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:33.951680 kubelet[2163]: E0209 09:43:33.951622 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:33.951742 kubelet[2163]: E0209 09:43:33.951732 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:33.951742 kubelet[2163]: W0209 09:43:33.951741 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:33.951803 kubelet[2163]: E0209 09:43:33.951754 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:33.951928 kubelet[2163]: E0209 09:43:33.951904 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:33.951928 kubelet[2163]: W0209 09:43:33.951916 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:33.951928 kubelet[2163]: E0209 09:43:33.951929 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:33.952258 kubelet[2163]: E0209 09:43:33.952159 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:33.952258 kubelet[2163]: W0209 09:43:33.952173 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:33.952258 kubelet[2163]: E0209 09:43:33.952192 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:33.952543 kubelet[2163]: E0209 09:43:33.952458 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:33.952543 kubelet[2163]: W0209 09:43:33.952469 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:33.952543 kubelet[2163]: E0209 09:43:33.952502 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:33.952758 kubelet[2163]: E0209 09:43:33.952679 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:33.952758 kubelet[2163]: W0209 09:43:33.952689 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:33.952758 kubelet[2163]: E0209 09:43:33.952714 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:33.953016 kubelet[2163]: E0209 09:43:33.952906 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:33.953016 kubelet[2163]: W0209 09:43:33.952917 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:33.953016 kubelet[2163]: E0209 09:43:33.952935 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:33.953192 kubelet[2163]: E0209 09:43:33.953179 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:33.953253 kubelet[2163]: W0209 09:43:33.953243 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:33.953363 kubelet[2163]: E0209 09:43:33.953352 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:33.953575 kubelet[2163]: E0209 09:43:33.953544 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:33.953575 kubelet[2163]: W0209 09:43:33.953559 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:33.953575 kubelet[2163]: E0209 09:43:33.953576 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:33.953695 kubelet[2163]: E0209 09:43:33.953684 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:33.953695 kubelet[2163]: W0209 09:43:33.953693 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:33.953761 kubelet[2163]: E0209 09:43:33.953706 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:33.953858 kubelet[2163]: E0209 09:43:33.953846 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:33.953858 kubelet[2163]: W0209 09:43:33.953857 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:33.953918 kubelet[2163]: E0209 09:43:33.953872 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:33.954187 kubelet[2163]: E0209 09:43:33.954173 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:33.954255 kubelet[2163]: W0209 09:43:33.954244 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:33.954390 kubelet[2163]: E0209 09:43:33.954377 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:33.954575 kubelet[2163]: E0209 09:43:33.954551 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:33.954575 kubelet[2163]: W0209 09:43:33.954566 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:33.954575 kubelet[2163]: E0209 09:43:33.954578 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:43:33.954877 kubelet[2163]: E0209 09:43:33.954862 2163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:43:33.954956 kubelet[2163]: W0209 09:43:33.954942 2163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:43:33.955022 kubelet[2163]: E0209 09:43:33.955012 2163 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:43:33.991323 env[1222]: time="2024-02-09T09:43:33.991263152Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:33.992607 env[1222]: time="2024-02-09T09:43:33.992563736Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbddd33ed55a4a5c129e8f09945d426860425b9778d9402efe7bcefea7990a57,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:33.994087 env[1222]: time="2024-02-09T09:43:33.994048877Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:33.995627 env[1222]: time="2024-02-09T09:43:33.995602417Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:33.996010 env[1222]: time="2024-02-09T09:43:33.995983972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" returns image reference 
\"sha256:cbddd33ed55a4a5c129e8f09945d426860425b9778d9402efe7bcefea7990a57\"" Feb 9 09:43:33.997839 env[1222]: time="2024-02-09T09:43:33.997791390Z" level=info msg="CreateContainer within sandbox \"06deff8d02ee7e3b471dd928c1a3b9d1393da647bdaa42c60f748a162bdccbd9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 9 09:43:34.009779 env[1222]: time="2024-02-09T09:43:34.009729924Z" level=info msg="CreateContainer within sandbox \"06deff8d02ee7e3b471dd928c1a3b9d1393da647bdaa42c60f748a162bdccbd9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"065cdf92141dca008016a556d40f6a59867f57a7f32f6aeca1ec17aa16409575\"" Feb 9 09:43:34.010209 env[1222]: time="2024-02-09T09:43:34.010183118Z" level=info msg="StartContainer for \"065cdf92141dca008016a556d40f6a59867f57a7f32f6aeca1ec17aa16409575\"" Feb 9 09:43:34.159329 env[1222]: time="2024-02-09T09:43:34.156442626Z" level=info msg="StartContainer for \"065cdf92141dca008016a556d40f6a59867f57a7f32f6aeca1ec17aa16409575\" returns successfully" Feb 9 09:43:34.186901 env[1222]: time="2024-02-09T09:43:34.186823258Z" level=info msg="shim disconnected" id=065cdf92141dca008016a556d40f6a59867f57a7f32f6aeca1ec17aa16409575 Feb 9 09:43:34.187094 env[1222]: time="2024-02-09T09:43:34.187053055Z" level=warning msg="cleaning up after shim disconnected" id=065cdf92141dca008016a556d40f6a59867f57a7f32f6aeca1ec17aa16409575 namespace=k8s.io Feb 9 09:43:34.187094 env[1222]: time="2024-02-09T09:43:34.187067055Z" level=info msg="cleaning up dead shim" Feb 9 09:43:34.194442 env[1222]: time="2024-02-09T09:43:34.194388246Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:43:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2924 runtime=io.containerd.runc.v2\n" Feb 9 09:43:34.907851 kubelet[2163]: E0209 09:43:34.907815 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Feb 9 09:43:34.908877 env[1222]: time="2024-02-09T09:43:34.908841150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\"" Feb 9 09:43:35.848600 kubelet[2163]: E0209 09:43:35.848557 2163 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dvjfd" podUID=440d5d1f-6bf0-4b9d-b4ee-c3a791745bc9 Feb 9 09:43:36.217734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4026332076.mount: Deactivated successfully. Feb 9 09:43:36.722642 kubelet[2163]: I0209 09:43:36.722345 2163 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 9 09:43:36.723103 kubelet[2163]: E0209 09:43:36.722935 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:36.776000 audit[2972]: NETFILTER_CFG table=filter:109 family=2 entries=13 op=nft_register_rule pid=2972 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:43:36.776000 audit[2972]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffcfa8a8e0 a2=0 a3=ffffbaedb6c0 items=0 ppid=2347 pid=2972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:36.783658 kernel: audit: type=1325 audit(1707471816.776:278): table=filter:109 family=2 entries=13 op=nft_register_rule pid=2972 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:43:36.783744 kernel: audit: type=1300 audit(1707471816.776:278): arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffcfa8a8e0 a2=0 a3=ffffbaedb6c0 items=0 ppid=2347 pid=2972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:36.776000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:43:36.785546 kernel: audit: type=1327 audit(1707471816.776:278): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:43:36.785000 audit[2972]: NETFILTER_CFG table=nat:110 family=2 entries=27 op=nft_register_chain pid=2972 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:43:36.785000 audit[2972]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8836 a0=3 a1=ffffcfa8a8e0 a2=0 a3=ffffbaedb6c0 items=0 ppid=2347 pid=2972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:36.798318 kernel: audit: type=1325 audit(1707471816.785:279): table=nat:110 family=2 entries=27 op=nft_register_chain pid=2972 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:43:36.798399 kernel: audit: type=1300 audit(1707471816.785:279): arch=c00000b7 syscall=211 success=yes exit=8836 a0=3 a1=ffffcfa8a8e0 a2=0 a3=ffffbaedb6c0 items=0 ppid=2347 pid=2972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:36.798429 kernel: audit: type=1327 audit(1707471816.785:279): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:43:36.785000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:43:36.911741 kubelet[2163]: 
E0209 09:43:36.911708 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:37.849770 kubelet[2163]: E0209 09:43:37.849726 2163 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dvjfd" podUID=440d5d1f-6bf0-4b9d-b4ee-c3a791745bc9 Feb 9 09:43:38.581220 env[1222]: time="2024-02-09T09:43:38.581169236Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:38.583026 env[1222]: time="2024-02-09T09:43:38.582988897Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9c9318f5fbf505fc3d84676966009a3887e58ea1e3eac10039e5a96dfceb254b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:38.584439 env[1222]: time="2024-02-09T09:43:38.584414323Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:38.586385 env[1222]: time="2024-02-09T09:43:38.586357142Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:38.587104 env[1222]: time="2024-02-09T09:43:38.587075735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:9c9318f5fbf505fc3d84676966009a3887e58ea1e3eac10039e5a96dfceb254b\"" Feb 9 09:43:38.589254 env[1222]: time="2024-02-09T09:43:38.589208033Z" level=info msg="CreateContainer within sandbox 
\"06deff8d02ee7e3b471dd928c1a3b9d1393da647bdaa42c60f748a162bdccbd9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 9 09:43:38.599970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1396147950.mount: Deactivated successfully. Feb 9 09:43:38.603279 env[1222]: time="2024-02-09T09:43:38.603239686Z" level=info msg="CreateContainer within sandbox \"06deff8d02ee7e3b471dd928c1a3b9d1393da647bdaa42c60f748a162bdccbd9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ef6c8ebf5cb799b59150e445535c89faf1ece3b97af94d0f76149ffd254617b0\"" Feb 9 09:43:38.603923 env[1222]: time="2024-02-09T09:43:38.603878680Z" level=info msg="StartContainer for \"ef6c8ebf5cb799b59150e445535c89faf1ece3b97af94d0f76149ffd254617b0\"" Feb 9 09:43:38.816833 env[1222]: time="2024-02-09T09:43:38.816770021Z" level=info msg="StartContainer for \"ef6c8ebf5cb799b59150e445535c89faf1ece3b97af94d0f76149ffd254617b0\" returns successfully" Feb 9 09:43:38.916433 kubelet[2163]: E0209 09:43:38.916168 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:39.383664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef6c8ebf5cb799b59150e445535c89faf1ece3b97af94d0f76149ffd254617b0-rootfs.mount: Deactivated successfully. 
Feb 9 09:43:39.387251 env[1222]: time="2024-02-09T09:43:39.387206135Z" level=info msg="shim disconnected" id=ef6c8ebf5cb799b59150e445535c89faf1ece3b97af94d0f76149ffd254617b0 Feb 9 09:43:39.387390 env[1222]: time="2024-02-09T09:43:39.387252534Z" level=warning msg="cleaning up after shim disconnected" id=ef6c8ebf5cb799b59150e445535c89faf1ece3b97af94d0f76149ffd254617b0 namespace=k8s.io Feb 9 09:43:39.387390 env[1222]: time="2024-02-09T09:43:39.387264614Z" level=info msg="cleaning up dead shim" Feb 9 09:43:39.394229 env[1222]: time="2024-02-09T09:43:39.394175985Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:43:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3029 runtime=io.containerd.runc.v2\n" Feb 9 09:43:39.413325 kubelet[2163]: I0209 09:43:39.413275 2163 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 09:43:39.434775 kubelet[2163]: I0209 09:43:39.434732 2163 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:43:39.435078 kubelet[2163]: I0209 09:43:39.435055 2163 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:43:39.435864 kubelet[2163]: I0209 09:43:39.435826 2163 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:43:39.595735 kubelet[2163]: I0209 09:43:39.595704 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j89md\" (UniqueName: \"kubernetes.io/projected/2e9d6543-4145-4cad-b789-3d4de6afdcd4-kube-api-access-j89md\") pod \"coredns-787d4945fb-8jgqw\" (UID: \"2e9d6543-4145-4cad-b789-3d4de6afdcd4\") " pod="kube-system/coredns-787d4945fb-8jgqw" Feb 9 09:43:39.596446 kubelet[2163]: I0209 09:43:39.596412 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fztc4\" (UniqueName: \"kubernetes.io/projected/c7231818-222f-4202-aba4-773c0e8d3a5b-kube-api-access-fztc4\") pod \"coredns-787d4945fb-4skrb\" (UID: \"c7231818-222f-4202-aba4-773c0e8d3a5b\") " 
pod="kube-system/coredns-787d4945fb-4skrb" Feb 9 09:43:39.596541 kubelet[2163]: I0209 09:43:39.596470 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7231818-222f-4202-aba4-773c0e8d3a5b-config-volume\") pod \"coredns-787d4945fb-4skrb\" (UID: \"c7231818-222f-4202-aba4-773c0e8d3a5b\") " pod="kube-system/coredns-787d4945fb-4skrb" Feb 9 09:43:39.596541 kubelet[2163]: I0209 09:43:39.596497 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e9d6543-4145-4cad-b789-3d4de6afdcd4-config-volume\") pod \"coredns-787d4945fb-8jgqw\" (UID: \"2e9d6543-4145-4cad-b789-3d4de6afdcd4\") " pod="kube-system/coredns-787d4945fb-8jgqw" Feb 9 09:43:39.596541 kubelet[2163]: I0209 09:43:39.596541 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bldw8\" (UniqueName: \"kubernetes.io/projected/74acb228-0bbd-4d33-80b3-a534d2c83208-kube-api-access-bldw8\") pod \"calico-kube-controllers-5d664cd787-78hlq\" (UID: \"74acb228-0bbd-4d33-80b3-a534d2c83208\") " pod="calico-system/calico-kube-controllers-5d664cd787-78hlq" Feb 9 09:43:39.596629 kubelet[2163]: I0209 09:43:39.596568 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74acb228-0bbd-4d33-80b3-a534d2c83208-tigera-ca-bundle\") pod \"calico-kube-controllers-5d664cd787-78hlq\" (UID: \"74acb228-0bbd-4d33-80b3-a534d2c83208\") " pod="calico-system/calico-kube-controllers-5d664cd787-78hlq" Feb 9 09:43:39.744457 kubelet[2163]: E0209 09:43:39.744419 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:39.744626 kubelet[2163]: E0209 
09:43:39.744610 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:39.744920 env[1222]: time="2024-02-09T09:43:39.744879736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-8jgqw,Uid:2e9d6543-4145-4cad-b789-3d4de6afdcd4,Namespace:kube-system,Attempt:0,}" Feb 9 09:43:39.745861 env[1222]: time="2024-02-09T09:43:39.745825766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-4skrb,Uid:c7231818-222f-4202-aba4-773c0e8d3a5b,Namespace:kube-system,Attempt:0,}" Feb 9 09:43:39.745963 env[1222]: time="2024-02-09T09:43:39.745861806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d664cd787-78hlq,Uid:74acb228-0bbd-4d33-80b3-a534d2c83208,Namespace:calico-system,Attempt:0,}" Feb 9 09:43:39.855862 env[1222]: time="2024-02-09T09:43:39.854606631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dvjfd,Uid:440d5d1f-6bf0-4b9d-b4ee-c3a791745bc9,Namespace:calico-system,Attempt:0,}" Feb 9 09:43:39.881049 env[1222]: time="2024-02-09T09:43:39.880967886Z" level=error msg="Failed to destroy network for sandbox \"e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:43:39.881378 env[1222]: time="2024-02-09T09:43:39.881344602Z" level=error msg="encountered an error cleaning up failed sandbox \"e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:43:39.881450 env[1222]: 
time="2024-02-09T09:43:39.881385442Z" level=error msg="Failed to destroy network for sandbox \"e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:43:39.881519 env[1222]: time="2024-02-09T09:43:39.881401042Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-4skrb,Uid:c7231818-222f-4202-aba4-773c0e8d3a5b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:43:39.881792 env[1222]: time="2024-02-09T09:43:39.881745198Z" level=error msg="encountered an error cleaning up failed sandbox \"e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:43:39.881894 env[1222]: time="2024-02-09T09:43:39.881793238Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d664cd787-78hlq,Uid:74acb228-0bbd-4d33-80b3-a534d2c83208,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:43:39.882032 kubelet[2163]: E0209 09:43:39.881995 2163 remote_runtime.go:176] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:43:39.882098 kubelet[2163]: E0209 09:43:39.882072 2163 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d664cd787-78hlq" Feb 9 09:43:39.882602 kubelet[2163]: E0209 09:43:39.882009 2163 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:43:39.882602 kubelet[2163]: E0209 09:43:39.882265 2163 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-4skrb" Feb 9 09:43:39.882602 kubelet[2163]: E0209 09:43:39.882274 2163 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d664cd787-78hlq" Feb 9 09:43:39.882602 kubelet[2163]: E0209 09:43:39.882309 2163 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-4skrb" Feb 9 09:43:39.882789 kubelet[2163]: E0209 09:43:39.882353 2163 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-4skrb_kube-system(c7231818-222f-4202-aba4-773c0e8d3a5b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-4skrb_kube-system(c7231818-222f-4202-aba4-773c0e8d3a5b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-4skrb" podUID=c7231818-222f-4202-aba4-773c0e8d3a5b Feb 9 09:43:39.882789 kubelet[2163]: E0209 09:43:39.882368 2163 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d664cd787-78hlq_calico-system(74acb228-0bbd-4d33-80b3-a534d2c83208)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-5d664cd787-78hlq_calico-system(74acb228-0bbd-4d33-80b3-a534d2c83208)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d664cd787-78hlq" podUID=74acb228-0bbd-4d33-80b3-a534d2c83208 Feb 9 09:43:39.893459 env[1222]: time="2024-02-09T09:43:39.893404121Z" level=error msg="Failed to destroy network for sandbox \"820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:43:39.893753 env[1222]: time="2024-02-09T09:43:39.893723118Z" level=error msg="encountered an error cleaning up failed sandbox \"820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:43:39.893831 env[1222]: time="2024-02-09T09:43:39.893769277Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-8jgqw,Uid:2e9d6543-4145-4cad-b789-3d4de6afdcd4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:43:39.894324 kubelet[2163]: E0209 09:43:39.893967 2163 remote_runtime.go:176] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:43:39.894324 kubelet[2163]: E0209 09:43:39.894020 2163 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-8jgqw" Feb 9 09:43:39.894324 kubelet[2163]: E0209 09:43:39.894055 2163 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-8jgqw" Feb 9 09:43:39.894449 kubelet[2163]: E0209 09:43:39.894103 2163 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-8jgqw_kube-system(2e9d6543-4145-4cad-b789-3d4de6afdcd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-8jgqw_kube-system(2e9d6543-4145-4cad-b789-3d4de6afdcd4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-8jgqw" podUID=2e9d6543-4145-4cad-b789-3d4de6afdcd4 Feb 9 09:43:39.919745 kubelet[2163]: E0209 09:43:39.919589 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:39.920927 env[1222]: time="2024-02-09T09:43:39.920892804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\"" Feb 9 09:43:39.923118 kubelet[2163]: I0209 09:43:39.923091 2163 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" Feb 9 09:43:39.923969 env[1222]: time="2024-02-09T09:43:39.923928134Z" level=info msg="StopPodSandbox for \"820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c\"" Feb 9 09:43:39.924919 kubelet[2163]: I0209 09:43:39.924901 2163 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" Feb 9 09:43:39.925296 env[1222]: time="2024-02-09T09:43:39.925261640Z" level=info msg="StopPodSandbox for \"e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4\"" Feb 9 09:43:39.925765 kubelet[2163]: I0209 09:43:39.925740 2163 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" Feb 9 09:43:39.926142 env[1222]: time="2024-02-09T09:43:39.926111152Z" level=info msg="StopPodSandbox for \"e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607\"" Feb 9 09:43:39.945124 env[1222]: time="2024-02-09T09:43:39.945067401Z" level=error msg="Failed to destroy network for sandbox \"ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Feb 9 09:43:39.945498 env[1222]: time="2024-02-09T09:43:39.945459997Z" level=error msg="encountered an error cleaning up failed sandbox \"ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:43:39.945561 env[1222]: time="2024-02-09T09:43:39.945511477Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dvjfd,Uid:440d5d1f-6bf0-4b9d-b4ee-c3a791745bc9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:43:39.946082 kubelet[2163]: E0209 09:43:39.945718 2163 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:43:39.946082 kubelet[2163]: E0209 09:43:39.945777 2163 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dvjfd" Feb 9 09:43:39.946082 
kubelet[2163]: E0209 09:43:39.945801 2163 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dvjfd" Feb 9 09:43:39.946247 kubelet[2163]: E0209 09:43:39.945860 2163 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dvjfd_calico-system(440d5d1f-6bf0-4b9d-b4ee-c3a791745bc9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dvjfd_calico-system(440d5d1f-6bf0-4b9d-b4ee-c3a791745bc9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dvjfd" podUID=440d5d1f-6bf0-4b9d-b4ee-c3a791745bc9 Feb 9 09:43:39.960554 env[1222]: time="2024-02-09T09:43:39.960477206Z" level=error msg="StopPodSandbox for \"e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607\" failed" error="failed to destroy network for sandbox \"e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:43:39.960689 env[1222]: time="2024-02-09T09:43:39.960485686Z" level=error msg="StopPodSandbox for \"e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4\" failed" error="failed to destroy network for sandbox 
\"e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:43:39.961107 kubelet[2163]: E0209 09:43:39.960933 2163 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" Feb 9 09:43:39.961107 kubelet[2163]: E0209 09:43:39.960990 2163 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4} Feb 9 09:43:39.961107 kubelet[2163]: E0209 09:43:39.960937 2163 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" Feb 9 09:43:39.961107 kubelet[2163]: E0209 09:43:39.961040 2163 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c7231818-222f-4202-aba4-773c0e8d3a5b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 09:43:39.961107 kubelet[2163]: E0209 09:43:39.961050 2163 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607} Feb 9 09:43:39.961330 kubelet[2163]: E0209 09:43:39.961080 2163 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"74acb228-0bbd-4d33-80b3-a534d2c83208\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 09:43:39.961330 kubelet[2163]: E0209 09:43:39.961127 2163 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"74acb228-0bbd-4d33-80b3-a534d2c83208\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d664cd787-78hlq" podUID=74acb228-0bbd-4d33-80b3-a534d2c83208 Feb 9 09:43:39.961330 kubelet[2163]: E0209 09:43:39.961081 2163 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c7231818-222f-4202-aba4-773c0e8d3a5b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-4skrb" podUID=c7231818-222f-4202-aba4-773c0e8d3a5b Feb 9 09:43:39.964066 env[1222]: time="2024-02-09T09:43:39.964021610Z" level=error msg="StopPodSandbox for \"820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c\" failed" error="failed to destroy network for sandbox \"820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:43:39.964441 kubelet[2163]: E0209 09:43:39.964324 2163 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" Feb 9 09:43:39.964441 kubelet[2163]: E0209 09:43:39.964353 2163 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c} Feb 9 09:43:39.964441 kubelet[2163]: E0209 09:43:39.964397 2163 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2e9d6543-4145-4cad-b789-3d4de6afdcd4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 09:43:39.964441 
kubelet[2163]: E0209 09:43:39.964424 2163 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2e9d6543-4145-4cad-b789-3d4de6afdcd4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-8jgqw" podUID=2e9d6543-4145-4cad-b789-3d4de6afdcd4 Feb 9 09:43:40.710560 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4-shm.mount: Deactivated successfully. Feb 9 09:43:40.710708 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c-shm.mount: Deactivated successfully. Feb 9 09:43:40.928203 kubelet[2163]: I0209 09:43:40.928168 2163 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" Feb 9 09:43:40.928918 env[1222]: time="2024-02-09T09:43:40.928875934Z" level=info msg="StopPodSandbox for \"ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479\"" Feb 9 09:43:40.951098 env[1222]: time="2024-02-09T09:43:40.951044719Z" level=error msg="StopPodSandbox for \"ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479\" failed" error="failed to destroy network for sandbox \"ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:43:40.951530 kubelet[2163]: E0209 09:43:40.951487 2163 remote_runtime.go:205] "StopPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" Feb 9 09:43:40.951608 kubelet[2163]: E0209 09:43:40.951537 2163 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479} Feb 9 09:43:40.951608 kubelet[2163]: E0209 09:43:40.951571 2163 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"440d5d1f-6bf0-4b9d-b4ee-c3a791745bc9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 09:43:40.951608 kubelet[2163]: E0209 09:43:40.951597 2163 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"440d5d1f-6bf0-4b9d-b4ee-c3a791745bc9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dvjfd" podUID=440d5d1f-6bf0-4b9d-b4ee-c3a791745bc9 Feb 9 09:43:45.192503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1971428786.mount: Deactivated successfully. 
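Every KillPodSandbox failure above traces back to the same missing file: before any teardown, the Calico CNI plugin stats `/var/lib/calico/nodename`, which the calico/node container writes at startup. A minimal Python sketch of that gate (the real plugin is Go; the function name here is illustrative):

```python
NODENAME_FILE = "/var/lib/calico/nodename"  # path from the errors above

def read_calico_nodename(path: str = NODENAME_FILE) -> str:
    """Sketch of the precondition behind the repeated log errors: the
    nodename file exists only once calico/node is running, so until then
    every CNI ADD/DEL against this node fails with the same message."""
    try:
        with open(path) as f:
            return f.read().strip()
    except FileNotFoundError:
        raise RuntimeError(
            f"stat {path}: no such file or directory: check that the "
            "calico/node container is running and has mounted /var/lib/calico/"
        ) from None
```

Once calico-node actually starts (the StartContainer event at 09:43:45 below), the file appears and the queued sandbox teardowns begin to succeed.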
Feb 9 09:43:45.530047 env[1222]: time="2024-02-09T09:43:45.529941370Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:45.531330 env[1222]: time="2024-02-09T09:43:45.531267599Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c445639cb28807ced09724016dc3b273b170b14d3b3d0c39b1affa1cc6b68774,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:45.532858 env[1222]: time="2024-02-09T09:43:45.532831466Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:45.533941 env[1222]: time="2024-02-09T09:43:45.533907377Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:a45dffb21a0e9ca8962f36359a2ab776beeecd93843543c2fa1745d7bbb0f754,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:45.534537 env[1222]: time="2024-02-09T09:43:45.534511132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\" returns image reference \"sha256:c445639cb28807ced09724016dc3b273b170b14d3b3d0c39b1affa1cc6b68774\"" Feb 9 09:43:45.548101 env[1222]: time="2024-02-09T09:43:45.548056739Z" level=info msg="CreateContainer within sandbox \"06deff8d02ee7e3b471dd928c1a3b9d1393da647bdaa42c60f748a162bdccbd9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 9 09:43:45.559111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3695683915.mount: Deactivated successfully. 
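The mount unit names systemd reports above (e.g. `var-lib-containerd-tmpmounts-containerd\x2dmount3695683915.mount`) are escaped mount paths: systemd maps "/" to "-" and hex-escapes other bytes, so a literal "-" in the path shows up as `\x2d`. A small decoder, assuming plain ASCII names with no leading-dot special cases:

```python
import re

def unit_to_path(unit: str) -> str:
    """Recover the mount path from a systemd mount unit name.
    systemd turns "/" into "-" and escapes other bytes as \\xNN."""
    name = unit.rsplit(".", 1)[0]   # drop the ".mount" suffix
    name = name.replace("-", "/")   # "-" separators were path slashes
    name = re.sub(r"\\x([0-9a-fA-F]{2})",
                  lambda m: chr(int(m.group(1), 16)), name)
    return "/" + name
```

Applied to the unit above this yields `/var/lib/containerd/tmpmounts/containerd-mount3695683915`, and the earlier `run-containerd-...-shm.mount` units decode to the sandboxes' shm mounts under `/run/containerd`.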
Feb 9 09:43:45.563708 env[1222]: time="2024-02-09T09:43:45.563635689Z" level=info msg="CreateContainer within sandbox \"06deff8d02ee7e3b471dd928c1a3b9d1393da647bdaa42c60f748a162bdccbd9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e1809aa34b13a292b82632c46f8b77ef5b2341754b7cfca1f0742e195c58324c\"" Feb 9 09:43:45.564461 env[1222]: time="2024-02-09T09:43:45.564416443Z" level=info msg="StartContainer for \"e1809aa34b13a292b82632c46f8b77ef5b2341754b7cfca1f0742e195c58324c\"" Feb 9 09:43:45.650690 env[1222]: time="2024-02-09T09:43:45.650620965Z" level=info msg="StartContainer for \"e1809aa34b13a292b82632c46f8b77ef5b2341754b7cfca1f0742e195c58324c\" returns successfully" Feb 9 09:43:45.786363 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 9 09:43:45.786495 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Feb 9 09:43:45.939053 kubelet[2163]: E0209 09:43:45.939006 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:46.940195 kubelet[2163]: I0209 09:43:46.940154 2163 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 9 09:43:46.940961 kubelet[2163]: E0209 09:43:46.940931 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:47.103093 kernel: audit: type=1400 audit(1707471827.097:280): avc: denied { write } for pid=3427 comm="tee" name="fd" dev="proc" ino=20545 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 09:43:47.103208 kernel: audit: type=1300 audit(1707471827.097:280): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffdc299995 a2=241 a3=1b6 items=1 ppid=3378 pid=3427 auid=4294967295 uid=0 gid=0
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:47.097000 audit[3427]: AVC avc: denied { write } for pid=3427 comm="tee" name="fd" dev="proc" ino=20545 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 09:43:47.097000 audit[3427]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffdc299995 a2=241 a3=1b6 items=1 ppid=3378 pid=3427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:47.097000 audit: CWD cwd="/etc/service/enabled/cni/log" Feb 9 09:43:47.104418 kernel: audit: type=1307 audit(1707471827.097:280): cwd="/etc/service/enabled/cni/log" Feb 9 09:43:47.097000 audit: PATH item=0 name="/dev/fd/63" inode=19580 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:43:47.106346 kernel: audit: type=1302 audit(1707471827.097:280): item=0 name="/dev/fd/63" inode=19580 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:43:47.097000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 09:43:47.107974 kernel: audit: type=1327 audit(1707471827.097:280): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 09:43:47.099000 audit[3431]: AVC avc: denied { write } for pid=3431 comm="tee" name="fd" dev="proc" ino=19586 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir 
permissive=0 Feb 9 09:43:47.109853 kernel: audit: type=1400 audit(1707471827.099:281): avc: denied { write } for pid=3431 comm="tee" name="fd" dev="proc" ino=19586 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 09:43:47.099000 audit[3431]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe6dd1993 a2=241 a3=1b6 items=1 ppid=3372 pid=3431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:47.115282 kernel: audit: type=1300 audit(1707471827.099:281): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe6dd1993 a2=241 a3=1b6 items=1 ppid=3372 pid=3431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:47.099000 audit: CWD cwd="/etc/service/enabled/bird6/log" Feb 9 09:43:47.117332 kernel: audit: type=1307 audit(1707471827.099:281): cwd="/etc/service/enabled/bird6/log" Feb 9 09:43:47.099000 audit: PATH item=0 name="/dev/fd/63" inode=19583 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:43:47.119320 kernel: audit: type=1302 audit(1707471827.099:281): item=0 name="/dev/fd/63" inode=19583 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:43:47.099000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 09:43:47.122069 kernel: audit: type=1327 audit(1707471827.099:281): 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 09:43:47.102000 audit[3434]: AVC avc: denied { write } for pid=3434 comm="tee" name="fd" dev="proc" ino=19590 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 09:43:47.102000 audit[3434]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff0c43983 a2=241 a3=1b6 items=1 ppid=3370 pid=3434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:47.102000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 9 09:43:47.102000 audit: PATH item=0 name="/dev/fd/63" inode=19110 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:43:47.102000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 09:43:47.108000 audit[3422]: AVC avc: denied { write } for pid=3422 comm="tee" name="fd" dev="proc" ino=19594 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 09:43:47.108000 audit[3422]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffdce12993 a2=241 a3=1b6 items=1 ppid=3366 pid=3422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:47.108000 audit: CWD cwd="/etc/service/enabled/felix/log" Feb 9 09:43:47.108000 audit: PATH item=0 name="/dev/fd/63" inode=18338 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 
nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:43:47.108000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 09:43:47.110000 audit[3424]: AVC avc: denied { write } for pid=3424 comm="tee" name="fd" dev="proc" ino=19598 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 09:43:47.110000 audit[3424]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffde1eb984 a2=241 a3=1b6 items=1 ppid=3369 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:47.110000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Feb 9 09:43:47.110000 audit: PATH item=0 name="/dev/fd/63" inode=18341 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:43:47.110000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 09:43:47.127000 audit[3438]: AVC avc: denied { write } for pid=3438 comm="tee" name="fd" dev="proc" ino=18344 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 09:43:47.127000 audit[3438]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc987d993 a2=241 a3=1b6 items=1 ppid=3371 pid=3438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:47.127000 audit: CWD cwd="/etc/service/enabled/confd/log" Feb 9 09:43:47.127000 audit: PATH item=0 
name="/dev/fd/63" inode=19111 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:43:47.127000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 09:43:47.130000 audit[3446]: AVC avc: denied { write } for pid=3446 comm="tee" name="fd" dev="proc" ino=19604 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 09:43:47.130000 audit[3446]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffed83a994 a2=241 a3=1b6 items=1 ppid=3365 pid=3446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:47.130000 audit: CWD cwd="/etc/service/enabled/bird/log" Feb 9 09:43:47.130000 audit: PATH item=0 name="/dev/fd/63" inode=20553 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:43:47.130000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 09:43:47.336000 audit[3515]: AVC avc: denied { bpf } for pid=3515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:47.336000 audit[3515]: AVC avc: denied { bpf } for pid=3515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:47.336000 audit[3515]: AVC avc: denied { perfmon } for pid=3515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:47.336000 audit[3515]: AVC avc: denied { perfmon } for pid=3515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:47.336000 audit[3515]: AVC avc: denied { perfmon } for pid=3515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:47.336000 audit[3515]: AVC avc: denied { perfmon } for pid=3515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:47.336000 audit[3515]: AVC avc: denied { perfmon } for pid=3515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:47.336000 audit[3515]: AVC avc: denied { bpf } for pid=3515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:47.336000 audit[3515]: AVC avc: denied { bpf } for pid=3515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:47.336000 audit: BPF prog-id=10 op=LOAD Feb 9 09:43:47.336000 audit[3515]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc224b918 a2=70 a3=0 items=0 ppid=3367 pid=3515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:47.336000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 
09:43:47.336000 audit: BPF prog-id=10 op=UNLOAD Feb 9 09:43:47.336000 audit[3515]: AVC avc: denied { bpf } for pid=3515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:47.336000 audit[3515]: AVC avc: denied { bpf } for pid=3515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:47.336000 audit[3515]: AVC avc: denied { perfmon } for pid=3515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:47.336000 audit[3515]: AVC avc: denied { perfmon } for pid=3515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:47.336000 audit[3515]: AVC avc: denied { perfmon } for pid=3515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:47.336000 audit[3515]: AVC avc: denied { perfmon } for pid=3515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:47.336000 audit[3515]: AVC avc: denied { perfmon } for pid=3515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:47.336000 audit[3515]: AVC avc: denied { bpf } for pid=3515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:47.336000 audit[3515]: AVC avc: denied { bpf } for pid=3515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:47.336000 audit: BPF 
prog-id=11 op=LOAD Feb 9 09:43:47.336000 audit[3515]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc224b918 a2=70 a3=4a174c items=0 ppid=3367 pid=3515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:47.336000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 09:43:47.336000 audit: BPF prog-id=11 op=UNLOAD Feb 9 09:43:47.336000 audit[3515]: AVC avc: denied { bpf } for pid=3515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:47.336000 audit[3515]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=0 a1=ffffc224b948 a2=70 a3=2580079f items=0 ppid=3367 pid=3515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:47.336000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 09:43:47.338000 audit[3515]: AVC avc: denied { bpf } for pid=3515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:47.338000 audit[3515]: AVC avc: denied { bpf } for pid=3515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:47.338000 audit[3515]: AVC avc: denied { bpf } for pid=3515 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:47.338000 audit[3515]: AVC avc: denied { perfmon } for pid=3515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:47.338000 audit[3515]: AVC avc: denied { perfmon } for pid=3515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:47.338000 audit[3515]: AVC avc: denied { perfmon } for pid=3515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:47.338000 audit[3515]: AVC avc: denied { perfmon } for pid=3515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:47.338000 audit[3515]: AVC avc: denied { perfmon } for pid=3515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:47.338000 audit[3515]: AVC avc: denied { bpf } for pid=3515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:47.338000 audit[3515]: AVC avc: denied { bpf } for pid=3515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:47.338000 audit: BPF prog-id=12 op=LOAD Feb 9 09:43:47.338000 audit[3515]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc224b898 a2=70 a3=258007b9 items=0 ppid=3367 pid=3515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 
9 09:43:47.338000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 09:43:47.342000 audit[3519]: AVC avc: denied { bpf } for pid=3519 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:47.342000 audit[3519]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffddc86148 a2=70 a3=0 items=0 ppid=3367 pid=3519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:47.342000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 09:43:47.342000 audit[3519]: AVC avc: denied { bpf } for pid=3519 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:43:47.342000 audit[3519]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffddc86028 a2=70 a3=2 items=0 ppid=3367 pid=3519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:47.342000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 09:43:47.353000 audit: BPF prog-id=12 op=UNLOAD Feb 9 09:43:47.386000 audit[3546]: NETFILTER_CFG table=mangle:111 family=2 entries=19 op=nft_register_chain pid=3546 subj=system_u:system_r:kernel_t:s0 
comm="iptables-nft-re" Feb 9 09:43:47.386000 audit[3546]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6800 a0=3 a1=ffffc84b1bf0 a2=0 a3=ffff8bb49fa8 items=0 ppid=3367 pid=3546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:47.386000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 09:43:47.390000 audit[3545]: NETFILTER_CFG table=raw:112 family=2 entries=19 op=nft_register_chain pid=3545 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 09:43:47.390000 audit[3545]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6132 a0=3 a1=ffffec5a8d40 a2=0 a3=ffff8ae37fa8 items=0 ppid=3367 pid=3545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:47.390000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 09:43:47.394000 audit[3549]: NETFILTER_CFG table=nat:113 family=2 entries=16 op=nft_register_chain pid=3549 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 09:43:47.394000 audit[3549]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5188 a0=3 a1=ffffeb049470 a2=0 a3=ffff9dcebfa8 items=0 ppid=3367 pid=3549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:47.394000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 09:43:47.398000 audit[3548]: NETFILTER_CFG table=filter:114 family=2 entries=39 op=nft_register_chain pid=3548 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 09:43:47.398000 audit[3548]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18472 a0=3 a1=ffffe65ad8a0 a2=0 a3=ffff9a53afa8 items=0 ppid=3367 pid=3548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:47.398000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 09:43:48.230118 systemd-networkd[1095]: vxlan.calico: Link UP Feb 9 09:43:48.230124 systemd-networkd[1095]: vxlan.calico: Gained carrier Feb 9 09:43:49.272418 systemd-networkd[1095]: vxlan.calico: Gained IPv6LL Feb 9 09:43:50.849432 env[1222]: time="2024-02-09T09:43:50.849366534Z" level=info msg="StopPodSandbox for \"820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c\"" Feb 9 09:43:50.852374 env[1222]: time="2024-02-09T09:43:50.850423527Z" level=info msg="StopPodSandbox for \"e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607\"" Feb 9 09:43:50.977988 kubelet[2163]: I0209 09:43:50.977942 2163 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-wb9kl" podStartSLOduration=-9.223372014876875e+09 pod.CreationTimestamp="2024-02-09 09:43:29 +0000 UTC" firstStartedPulling="2024-02-09 09:43:30.805702763 +0000 UTC m=+21.098483174" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:43:45.953269843 +0000 UTC m=+36.246050214" watchObservedRunningTime="2024-02-09 09:43:50.977901313 +0000 UTC 
m=+41.270681764" Feb 9 09:43:51.114614 env[1222]: 2024-02-09 09:43:50.980 [INFO][3594] k8s.go 578: Cleaning up netns ContainerID="e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" Feb 9 09:43:51.114614 env[1222]: 2024-02-09 09:43:50.981 [INFO][3594] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" iface="eth0" netns="/var/run/netns/cni-86415aa6-e6ea-7226-d26d-f2716566c173" Feb 9 09:43:51.114614 env[1222]: 2024-02-09 09:43:50.981 [INFO][3594] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" iface="eth0" netns="/var/run/netns/cni-86415aa6-e6ea-7226-d26d-f2716566c173" Feb 9 09:43:51.114614 env[1222]: 2024-02-09 09:43:50.981 [INFO][3594] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" iface="eth0" netns="/var/run/netns/cni-86415aa6-e6ea-7226-d26d-f2716566c173" Feb 9 09:43:51.114614 env[1222]: 2024-02-09 09:43:50.981 [INFO][3594] k8s.go 585: Releasing IP address(es) ContainerID="e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" Feb 9 09:43:51.114614 env[1222]: 2024-02-09 09:43:50.981 [INFO][3594] utils.go 188: Calico CNI releasing IP address ContainerID="e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" Feb 9 09:43:51.114614 env[1222]: 2024-02-09 09:43:51.093 [INFO][3610] ipam_plugin.go 415: Releasing address using handleID ContainerID="e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" HandleID="k8s-pod-network.e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" Workload="localhost-k8s-calico--kube--controllers--5d664cd787--78hlq-eth0" Feb 9 09:43:51.114614 env[1222]: 2024-02-09 09:43:51.094 [INFO][3610] ipam_plugin.go 356: About to acquire host-wide IPAM lock. 
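The audit records in the 09:43:47 run above carry the process command line as a PROCTITLE field: hex-encoded argv in which the NUL separators between arguments survive the encoding. Decoding is mechanical:

```python
def decode_proctitle(hex_argv: str) -> list[str]:
    """Decode an audit PROCTITLE value: hex-encoded bytes of the
    process command line, with NUL bytes separating argv entries."""
    return bytes.fromhex(hex_argv).decode("ascii").split("\x00")
```

The `tee` records decode to `/usr/bin/coreutils --coreutils-prog-shebang=tee /usr/bin/tee /dev/fd/63` (calico-node piping its service logs through tee), and the pid 3515 records decode to `bpftool prog load /usr/lib/calico/bpf/filter.o /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A type xdp`, i.e. Calico probing its XDP prefilter program.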
Feb 9 09:43:51.114614 env[1222]: 2024-02-09 09:43:51.094 [INFO][3610] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:43:51.114614 env[1222]: 2024-02-09 09:43:51.105 [WARNING][3610] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" HandleID="k8s-pod-network.e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" Workload="localhost-k8s-calico--kube--controllers--5d664cd787--78hlq-eth0" Feb 9 09:43:51.114614 env[1222]: 2024-02-09 09:43:51.105 [INFO][3610] ipam_plugin.go 443: Releasing address using workloadID ContainerID="e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" HandleID="k8s-pod-network.e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" Workload="localhost-k8s-calico--kube--controllers--5d664cd787--78hlq-eth0" Feb 9 09:43:51.114614 env[1222]: 2024-02-09 09:43:51.106 [INFO][3610] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:43:51.114614 env[1222]: 2024-02-09 09:43:51.111 [INFO][3594] k8s.go 591: Teardown processing complete. ContainerID="e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" Feb 9 09:43:51.114614 env[1222]: time="2024-02-09T09:43:51.113723457Z" level=info msg="TearDown network for sandbox \"e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607\" successfully" Feb 9 09:43:51.114614 env[1222]: time="2024-02-09T09:43:51.113756737Z" level=info msg="StopPodSandbox for \"e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607\" returns successfully" Feb 9 09:43:51.114614 env[1222]: time="2024-02-09T09:43:51.114467452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d664cd787-78hlq,Uid:74acb228-0bbd-4d33-80b3-a534d2c83208,Namespace:calico-system,Attempt:1,}" Feb 9 09:43:51.115930 systemd[1]: run-netns-cni\x2d86415aa6\x2de6ea\x2d7226\x2dd26d\x2df2716566c173.mount: Deactivated successfully. 
Feb 9 09:43:51.122274 env[1222]: 2024-02-09 09:43:50.979 [INFO][3595] k8s.go 578: Cleaning up netns ContainerID="820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" Feb 9 09:43:51.122274 env[1222]: 2024-02-09 09:43:50.979 [INFO][3595] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" iface="eth0" netns="/var/run/netns/cni-ec44b793-a0d9-8c42-75d0-1b30a84e29b0" Feb 9 09:43:51.122274 env[1222]: 2024-02-09 09:43:50.980 [INFO][3595] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" iface="eth0" netns="/var/run/netns/cni-ec44b793-a0d9-8c42-75d0-1b30a84e29b0" Feb 9 09:43:51.122274 env[1222]: 2024-02-09 09:43:50.980 [INFO][3595] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" iface="eth0" netns="/var/run/netns/cni-ec44b793-a0d9-8c42-75d0-1b30a84e29b0" Feb 9 09:43:51.122274 env[1222]: 2024-02-09 09:43:50.980 [INFO][3595] k8s.go 585: Releasing IP address(es) ContainerID="820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" Feb 9 09:43:51.122274 env[1222]: 2024-02-09 09:43:50.980 [INFO][3595] utils.go 188: Calico CNI releasing IP address ContainerID="820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" Feb 9 09:43:51.122274 env[1222]: 2024-02-09 09:43:51.093 [INFO][3609] ipam_plugin.go 415: Releasing address using handleID ContainerID="820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" HandleID="k8s-pod-network.820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" Workload="localhost-k8s-coredns--787d4945fb--8jgqw-eth0" Feb 9 09:43:51.122274 env[1222]: 2024-02-09 09:43:51.094 [INFO][3609] ipam_plugin.go 356: About to acquire host-wide IPAM lock. 
Feb 9 09:43:51.122274 env[1222]: 2024-02-09 09:43:51.106 [INFO][3609] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:43:51.122274 env[1222]: 2024-02-09 09:43:51.116 [WARNING][3609] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" HandleID="k8s-pod-network.820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" Workload="localhost-k8s-coredns--787d4945fb--8jgqw-eth0" Feb 9 09:43:51.122274 env[1222]: 2024-02-09 09:43:51.116 [INFO][3609] ipam_plugin.go 443: Releasing address using workloadID ContainerID="820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" HandleID="k8s-pod-network.820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" Workload="localhost-k8s-coredns--787d4945fb--8jgqw-eth0" Feb 9 09:43:51.122274 env[1222]: 2024-02-09 09:43:51.118 [INFO][3609] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:43:51.122274 env[1222]: 2024-02-09 09:43:51.120 [INFO][3595] k8s.go 591: Teardown processing complete. 
ContainerID="820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" Feb 9 09:43:51.122681 env[1222]: time="2024-02-09T09:43:51.122448195Z" level=info msg="TearDown network for sandbox \"820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c\" successfully" Feb 9 09:43:51.122681 env[1222]: time="2024-02-09T09:43:51.122472075Z" level=info msg="StopPodSandbox for \"820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c\" returns successfully" Feb 9 09:43:51.122729 kubelet[2163]: E0209 09:43:51.122698 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:51.124405 env[1222]: time="2024-02-09T09:43:51.124372261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-8jgqw,Uid:2e9d6543-4145-4cad-b789-3d4de6afdcd4,Namespace:kube-system,Attempt:1,}" Feb 9 09:43:51.124612 systemd[1]: run-netns-cni\x2dec44b793\x2da0d9\x2d8c42\x2d75d0\x2d1b30a84e29b0.mount: Deactivated successfully. 
Feb 9 09:43:51.256293 systemd-networkd[1095]: cali7f8e52e54c4: Link UP Feb 9 09:43:51.258922 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:43:51.259055 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7f8e52e54c4: link becomes ready Feb 9 09:43:51.259088 systemd-networkd[1095]: cali7f8e52e54c4: Gained carrier Feb 9 09:43:51.272142 env[1222]: 2024-02-09 09:43:51.169 [INFO][3624] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5d664cd787--78hlq-eth0 calico-kube-controllers-5d664cd787- calico-system 74acb228-0bbd-4d33-80b3-a534d2c83208 701 0 2024-02-09 09:43:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d664cd787 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5d664cd787-78hlq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7f8e52e54c4 [] []}} ContainerID="37f03f2ae789feaa8764797cacb4ca9cc377713a5c99f9dbe7a3b1eacbb8e145" Namespace="calico-system" Pod="calico-kube-controllers-5d664cd787-78hlq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d664cd787--78hlq-" Feb 9 09:43:51.272142 env[1222]: 2024-02-09 09:43:51.170 [INFO][3624] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="37f03f2ae789feaa8764797cacb4ca9cc377713a5c99f9dbe7a3b1eacbb8e145" Namespace="calico-system" Pod="calico-kube-controllers-5d664cd787-78hlq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d664cd787--78hlq-eth0" Feb 9 09:43:51.272142 env[1222]: 2024-02-09 09:43:51.201 [INFO][3651] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="37f03f2ae789feaa8764797cacb4ca9cc377713a5c99f9dbe7a3b1eacbb8e145" 
HandleID="k8s-pod-network.37f03f2ae789feaa8764797cacb4ca9cc377713a5c99f9dbe7a3b1eacbb8e145" Workload="localhost-k8s-calico--kube--controllers--5d664cd787--78hlq-eth0" Feb 9 09:43:51.272142 env[1222]: 2024-02-09 09:43:51.222 [INFO][3651] ipam_plugin.go 268: Auto assigning IP ContainerID="37f03f2ae789feaa8764797cacb4ca9cc377713a5c99f9dbe7a3b1eacbb8e145" HandleID="k8s-pod-network.37f03f2ae789feaa8764797cacb4ca9cc377713a5c99f9dbe7a3b1eacbb8e145" Workload="localhost-k8s-calico--kube--controllers--5d664cd787--78hlq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400029cfe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5d664cd787-78hlq", "timestamp":"2024-02-09 09:43:51.20142711 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 09:43:51.272142 env[1222]: 2024-02-09 09:43:51.222 [INFO][3651] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:43:51.272142 env[1222]: 2024-02-09 09:43:51.222 [INFO][3651] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 09:43:51.272142 env[1222]: 2024-02-09 09:43:51.222 [INFO][3651] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 9 09:43:51.272142 env[1222]: 2024-02-09 09:43:51.223 [INFO][3651] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.37f03f2ae789feaa8764797cacb4ca9cc377713a5c99f9dbe7a3b1eacbb8e145" host="localhost" Feb 9 09:43:51.272142 env[1222]: 2024-02-09 09:43:51.228 [INFO][3651] ipam.go 372: Looking up existing affinities for host host="localhost" Feb 9 09:43:51.272142 env[1222]: 2024-02-09 09:43:51.231 [INFO][3651] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 9 09:43:51.272142 env[1222]: 2024-02-09 09:43:51.233 [INFO][3651] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 9 09:43:51.272142 env[1222]: 2024-02-09 09:43:51.235 [INFO][3651] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 9 09:43:51.272142 env[1222]: 2024-02-09 09:43:51.235 [INFO][3651] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.37f03f2ae789feaa8764797cacb4ca9cc377713a5c99f9dbe7a3b1eacbb8e145" host="localhost" Feb 9 09:43:51.272142 env[1222]: 2024-02-09 09:43:51.237 [INFO][3651] ipam.go 1682: Creating new handle: k8s-pod-network.37f03f2ae789feaa8764797cacb4ca9cc377713a5c99f9dbe7a3b1eacbb8e145 Feb 9 09:43:51.272142 env[1222]: 2024-02-09 09:43:51.239 [INFO][3651] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.37f03f2ae789feaa8764797cacb4ca9cc377713a5c99f9dbe7a3b1eacbb8e145" host="localhost" Feb 9 09:43:51.272142 env[1222]: 2024-02-09 09:43:51.244 [INFO][3651] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.37f03f2ae789feaa8764797cacb4ca9cc377713a5c99f9dbe7a3b1eacbb8e145" host="localhost" Feb 9 09:43:51.272142 env[1222]: 2024-02-09 09:43:51.244 [INFO][3651] ipam.go 
847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.37f03f2ae789feaa8764797cacb4ca9cc377713a5c99f9dbe7a3b1eacbb8e145" host="localhost" Feb 9 09:43:51.272142 env[1222]: 2024-02-09 09:43:51.244 [INFO][3651] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:43:51.272142 env[1222]: 2024-02-09 09:43:51.244 [INFO][3651] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="37f03f2ae789feaa8764797cacb4ca9cc377713a5c99f9dbe7a3b1eacbb8e145" HandleID="k8s-pod-network.37f03f2ae789feaa8764797cacb4ca9cc377713a5c99f9dbe7a3b1eacbb8e145" Workload="localhost-k8s-calico--kube--controllers--5d664cd787--78hlq-eth0" Feb 9 09:43:51.272721 env[1222]: 2024-02-09 09:43:51.246 [INFO][3624] k8s.go 385: Populated endpoint ContainerID="37f03f2ae789feaa8764797cacb4ca9cc377713a5c99f9dbe7a3b1eacbb8e145" Namespace="calico-system" Pod="calico-kube-controllers-5d664cd787-78hlq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d664cd787--78hlq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d664cd787--78hlq-eth0", GenerateName:"calico-kube-controllers-5d664cd787-", Namespace:"calico-system", SelfLink:"", UID:"74acb228-0bbd-4d33-80b3-a534d2c83208", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 43, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d664cd787", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5d664cd787-78hlq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7f8e52e54c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:43:51.272721 env[1222]: 2024-02-09 09:43:51.246 [INFO][3624] k8s.go 386: Calico CNI using IPs: [192.168.88.129/32] ContainerID="37f03f2ae789feaa8764797cacb4ca9cc377713a5c99f9dbe7a3b1eacbb8e145" Namespace="calico-system" Pod="calico-kube-controllers-5d664cd787-78hlq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d664cd787--78hlq-eth0" Feb 9 09:43:51.272721 env[1222]: 2024-02-09 09:43:51.246 [INFO][3624] dataplane_linux.go 68: Setting the host side veth name to cali7f8e52e54c4 ContainerID="37f03f2ae789feaa8764797cacb4ca9cc377713a5c99f9dbe7a3b1eacbb8e145" Namespace="calico-system" Pod="calico-kube-controllers-5d664cd787-78hlq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d664cd787--78hlq-eth0" Feb 9 09:43:51.272721 env[1222]: 2024-02-09 09:43:51.256 [INFO][3624] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="37f03f2ae789feaa8764797cacb4ca9cc377713a5c99f9dbe7a3b1eacbb8e145" Namespace="calico-system" Pod="calico-kube-controllers-5d664cd787-78hlq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d664cd787--78hlq-eth0" Feb 9 09:43:51.272721 env[1222]: 2024-02-09 09:43:51.257 [INFO][3624] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="37f03f2ae789feaa8764797cacb4ca9cc377713a5c99f9dbe7a3b1eacbb8e145" Namespace="calico-system" Pod="calico-kube-controllers-5d664cd787-78hlq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d664cd787--78hlq-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d664cd787--78hlq-eth0", GenerateName:"calico-kube-controllers-5d664cd787-", Namespace:"calico-system", SelfLink:"", UID:"74acb228-0bbd-4d33-80b3-a534d2c83208", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 43, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d664cd787", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"37f03f2ae789feaa8764797cacb4ca9cc377713a5c99f9dbe7a3b1eacbb8e145", Pod:"calico-kube-controllers-5d664cd787-78hlq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7f8e52e54c4", MAC:"e6:5c:fc:57:16:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:43:51.272721 env[1222]: 2024-02-09 09:43:51.268 [INFO][3624] k8s.go 491: Wrote updated endpoint to datastore ContainerID="37f03f2ae789feaa8764797cacb4ca9cc377713a5c99f9dbe7a3b1eacbb8e145" Namespace="calico-system" Pod="calico-kube-controllers-5d664cd787-78hlq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d664cd787--78hlq-eth0" Feb 9 09:43:51.286079 systemd-networkd[1095]: cali95e16740f6a: Link UP Feb 9 
09:43:51.287491 systemd-networkd[1095]: cali95e16740f6a: Gained carrier Feb 9 09:43:51.290329 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali95e16740f6a: link becomes ready Feb 9 09:43:51.297472 env[1222]: 2024-02-09 09:43:51.169 [INFO][3634] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--787d4945fb--8jgqw-eth0 coredns-787d4945fb- kube-system 2e9d6543-4145-4cad-b789-3d4de6afdcd4 702 0 2024-02-09 09:43:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-787d4945fb-8jgqw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali95e16740f6a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c3d1c92e6731ad84479c4900a9bf7a32ca3612e5fbb76857a0978d1b31ee609e" Namespace="kube-system" Pod="coredns-787d4945fb-8jgqw" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--8jgqw-" Feb 9 09:43:51.297472 env[1222]: 2024-02-09 09:43:51.170 [INFO][3634] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="c3d1c92e6731ad84479c4900a9bf7a32ca3612e5fbb76857a0978d1b31ee609e" Namespace="kube-system" Pod="coredns-787d4945fb-8jgqw" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--8jgqw-eth0" Feb 9 09:43:51.297472 env[1222]: 2024-02-09 09:43:51.209 [INFO][3652] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c3d1c92e6731ad84479c4900a9bf7a32ca3612e5fbb76857a0978d1b31ee609e" HandleID="k8s-pod-network.c3d1c92e6731ad84479c4900a9bf7a32ca3612e5fbb76857a0978d1b31ee609e" Workload="localhost-k8s-coredns--787d4945fb--8jgqw-eth0" Feb 9 09:43:51.297472 env[1222]: 2024-02-09 09:43:51.224 [INFO][3652] ipam_plugin.go 268: Auto assigning IP ContainerID="c3d1c92e6731ad84479c4900a9bf7a32ca3612e5fbb76857a0978d1b31ee609e" 
HandleID="k8s-pod-network.c3d1c92e6731ad84479c4900a9bf7a32ca3612e5fbb76857a0978d1b31ee609e" Workload="localhost-k8s-coredns--787d4945fb--8jgqw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d8f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-787d4945fb-8jgqw", "timestamp":"2024-02-09 09:43:51.209017975 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 09:43:51.297472 env[1222]: 2024-02-09 09:43:51.224 [INFO][3652] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:43:51.297472 env[1222]: 2024-02-09 09:43:51.244 [INFO][3652] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:43:51.297472 env[1222]: 2024-02-09 09:43:51.244 [INFO][3652] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 9 09:43:51.297472 env[1222]: 2024-02-09 09:43:51.246 [INFO][3652] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c3d1c92e6731ad84479c4900a9bf7a32ca3612e5fbb76857a0978d1b31ee609e" host="localhost" Feb 9 09:43:51.297472 env[1222]: 2024-02-09 09:43:51.251 [INFO][3652] ipam.go 372: Looking up existing affinities for host host="localhost" Feb 9 09:43:51.297472 env[1222]: 2024-02-09 09:43:51.263 [INFO][3652] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 9 09:43:51.297472 env[1222]: 2024-02-09 09:43:51.266 [INFO][3652] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 9 09:43:51.297472 env[1222]: 2024-02-09 09:43:51.269 [INFO][3652] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 9 09:43:51.297472 env[1222]: 2024-02-09 09:43:51.269 [INFO][3652] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.c3d1c92e6731ad84479c4900a9bf7a32ca3612e5fbb76857a0978d1b31ee609e" host="localhost" Feb 9 09:43:51.297472 env[1222]: 2024-02-09 09:43:51.271 [INFO][3652] ipam.go 1682: Creating new handle: k8s-pod-network.c3d1c92e6731ad84479c4900a9bf7a32ca3612e5fbb76857a0978d1b31ee609e Feb 9 09:43:51.297472 env[1222]: 2024-02-09 09:43:51.276 [INFO][3652] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c3d1c92e6731ad84479c4900a9bf7a32ca3612e5fbb76857a0978d1b31ee609e" host="localhost" Feb 9 09:43:51.297472 env[1222]: 2024-02-09 09:43:51.280 [INFO][3652] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.c3d1c92e6731ad84479c4900a9bf7a32ca3612e5fbb76857a0978d1b31ee609e" host="localhost" Feb 9 09:43:51.297472 env[1222]: 2024-02-09 09:43:51.280 [INFO][3652] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.c3d1c92e6731ad84479c4900a9bf7a32ca3612e5fbb76857a0978d1b31ee609e" host="localhost" Feb 9 09:43:51.297472 env[1222]: 2024-02-09 09:43:51.280 [INFO][3652] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 09:43:51.297472 env[1222]: 2024-02-09 09:43:51.280 [INFO][3652] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c3d1c92e6731ad84479c4900a9bf7a32ca3612e5fbb76857a0978d1b31ee609e" HandleID="k8s-pod-network.c3d1c92e6731ad84479c4900a9bf7a32ca3612e5fbb76857a0978d1b31ee609e" Workload="localhost-k8s-coredns--787d4945fb--8jgqw-eth0" Feb 9 09:43:51.296000 audit[3686]: NETFILTER_CFG table=filter:115 family=2 entries=36 op=nft_register_chain pid=3686 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 09:43:51.296000 audit[3686]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19908 a0=3 a1=ffffd2073050 a2=0 a3=ffff9d602fa8 items=0 ppid=3367 pid=3686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:51.296000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 09:43:51.299539 env[1222]: 2024-02-09 09:43:51.284 [INFO][3634] k8s.go 385: Populated endpoint ContainerID="c3d1c92e6731ad84479c4900a9bf7a32ca3612e5fbb76857a0978d1b31ee609e" Namespace="kube-system" Pod="coredns-787d4945fb-8jgqw" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--8jgqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--8jgqw-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"2e9d6543-4145-4cad-b789-3d4de6afdcd4", ResourceVersion:"702", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 43, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-787d4945fb-8jgqw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95e16740f6a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:43:51.299539 env[1222]: 2024-02-09 09:43:51.284 [INFO][3634] k8s.go 386: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c3d1c92e6731ad84479c4900a9bf7a32ca3612e5fbb76857a0978d1b31ee609e" Namespace="kube-system" Pod="coredns-787d4945fb-8jgqw" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--8jgqw-eth0" Feb 9 09:43:51.299539 env[1222]: 2024-02-09 09:43:51.284 [INFO][3634] dataplane_linux.go 68: Setting the host side veth name to cali95e16740f6a ContainerID="c3d1c92e6731ad84479c4900a9bf7a32ca3612e5fbb76857a0978d1b31ee609e" Namespace="kube-system" Pod="coredns-787d4945fb-8jgqw" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--8jgqw-eth0" Feb 9 09:43:51.299539 env[1222]: 2024-02-09 09:43:51.286 [INFO][3634] dataplane_linux.go 479: Disabling IPv4 forwarding 
ContainerID="c3d1c92e6731ad84479c4900a9bf7a32ca3612e5fbb76857a0978d1b31ee609e" Namespace="kube-system" Pod="coredns-787d4945fb-8jgqw" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--8jgqw-eth0" Feb 9 09:43:51.299539 env[1222]: 2024-02-09 09:43:51.287 [INFO][3634] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="c3d1c92e6731ad84479c4900a9bf7a32ca3612e5fbb76857a0978d1b31ee609e" Namespace="kube-system" Pod="coredns-787d4945fb-8jgqw" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--8jgqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--8jgqw-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"2e9d6543-4145-4cad-b789-3d4de6afdcd4", ResourceVersion:"702", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 43, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c3d1c92e6731ad84479c4900a9bf7a32ca3612e5fbb76857a0978d1b31ee609e", Pod:"coredns-787d4945fb-8jgqw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95e16740f6a", MAC:"3a:16:31:e7:c1:85", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:43:51.299539 env[1222]: 2024-02-09 09:43:51.295 [INFO][3634] k8s.go 491: Wrote updated endpoint to datastore ContainerID="c3d1c92e6731ad84479c4900a9bf7a32ca3612e5fbb76857a0978d1b31ee609e" Namespace="kube-system" Pod="coredns-787d4945fb-8jgqw" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--8jgqw-eth0" Feb 9 09:43:51.300650 env[1222]: time="2024-02-09T09:43:51.300590800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:43:51.300650 env[1222]: time="2024-02-09T09:43:51.300638560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:43:51.300773 env[1222]: time="2024-02-09T09:43:51.300653639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:43:51.300843 env[1222]: time="2024-02-09T09:43:51.300802838Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/37f03f2ae789feaa8764797cacb4ca9cc377713a5c99f9dbe7a3b1eacbb8e145 pid=3694 runtime=io.containerd.runc.v2 Feb 9 09:43:51.320799 env[1222]: time="2024-02-09T09:43:51.320722976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:43:51.319000 audit[3739]: NETFILTER_CFG table=filter:116 family=2 entries=40 op=nft_register_chain pid=3739 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 09:43:51.319000 audit[3739]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21096 a0=3 a1=fffff686aae0 a2=0 a3=ffff8e3d0fa8 items=0 ppid=3367 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:51.319000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 09:43:51.321186 env[1222]: time="2024-02-09T09:43:51.320763735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:43:51.321186 env[1222]: time="2024-02-09T09:43:51.320775535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:43:51.321290 env[1222]: time="2024-02-09T09:43:51.321204372Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c3d1c92e6731ad84479c4900a9bf7a32ca3612e5fbb76857a0978d1b31ee609e pid=3733 runtime=io.containerd.runc.v2 Feb 9 09:43:51.339989 systemd-resolved[1155]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 09:43:51.348402 systemd-resolved[1155]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 09:43:51.361460 env[1222]: time="2024-02-09T09:43:51.361418765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d664cd787-78hlq,Uid:74acb228-0bbd-4d33-80b3-a534d2c83208,Namespace:calico-system,Attempt:1,} returns sandbox id \"37f03f2ae789feaa8764797cacb4ca9cc377713a5c99f9dbe7a3b1eacbb8e145\"" Feb 9 09:43:51.367252 env[1222]: time="2024-02-09T09:43:51.365208817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\"" Feb 9 09:43:51.374828 env[1222]: time="2024-02-09T09:43:51.374797389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-8jgqw,Uid:2e9d6543-4145-4cad-b789-3d4de6afdcd4,Namespace:kube-system,Attempt:1,} returns sandbox id \"c3d1c92e6731ad84479c4900a9bf7a32ca3612e5fbb76857a0978d1b31ee609e\"" Feb 9 09:43:51.375700 kubelet[2163]: E0209 09:43:51.375682 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:51.377921 env[1222]: time="2024-02-09T09:43:51.377885287Z" level=info msg="CreateContainer within sandbox \"c3d1c92e6731ad84479c4900a9bf7a32ca3612e5fbb76857a0978d1b31ee609e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 09:43:51.387281 env[1222]: time="2024-02-09T09:43:51.387242700Z" 
level=info msg="CreateContainer within sandbox \"c3d1c92e6731ad84479c4900a9bf7a32ca3612e5fbb76857a0978d1b31ee609e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b12ecbc7b7a9cbc1f0c022792935866a0aa5d0f3b6a12129cfdf05a8eeafcebd\"" Feb 9 09:43:51.388115 env[1222]: time="2024-02-09T09:43:51.388087854Z" level=info msg="StartContainer for \"b12ecbc7b7a9cbc1f0c022792935866a0aa5d0f3b6a12129cfdf05a8eeafcebd\"" Feb 9 09:43:51.434810 env[1222]: time="2024-02-09T09:43:51.434766760Z" level=info msg="StartContainer for \"b12ecbc7b7a9cbc1f0c022792935866a0aa5d0f3b6a12129cfdf05a8eeafcebd\" returns successfully" Feb 9 09:43:51.849578 env[1222]: time="2024-02-09T09:43:51.849519391Z" level=info msg="StopPodSandbox for \"e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4\"" Feb 9 09:43:51.937842 env[1222]: 2024-02-09 09:43:51.899 [INFO][3842] k8s.go 578: Cleaning up netns ContainerID="e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" Feb 9 09:43:51.937842 env[1222]: 2024-02-09 09:43:51.899 [INFO][3842] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" iface="eth0" netns="/var/run/netns/cni-239c5243-f42f-c972-bd44-611c35e8dadb" Feb 9 09:43:51.937842 env[1222]: 2024-02-09 09:43:51.900 [INFO][3842] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" iface="eth0" netns="/var/run/netns/cni-239c5243-f42f-c972-bd44-611c35e8dadb" Feb 9 09:43:51.937842 env[1222]: 2024-02-09 09:43:51.901 [INFO][3842] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" iface="eth0" netns="/var/run/netns/cni-239c5243-f42f-c972-bd44-611c35e8dadb" Feb 9 09:43:51.937842 env[1222]: 2024-02-09 09:43:51.901 [INFO][3842] k8s.go 585: Releasing IP address(es) ContainerID="e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" Feb 9 09:43:51.937842 env[1222]: 2024-02-09 09:43:51.901 [INFO][3842] utils.go 188: Calico CNI releasing IP address ContainerID="e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" Feb 9 09:43:51.937842 env[1222]: 2024-02-09 09:43:51.921 [INFO][3850] ipam_plugin.go 415: Releasing address using handleID ContainerID="e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" HandleID="k8s-pod-network.e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" Workload="localhost-k8s-coredns--787d4945fb--4skrb-eth0" Feb 9 09:43:51.937842 env[1222]: 2024-02-09 09:43:51.921 [INFO][3850] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:43:51.937842 env[1222]: 2024-02-09 09:43:51.921 [INFO][3850] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:43:51.937842 env[1222]: 2024-02-09 09:43:51.933 [WARNING][3850] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" HandleID="k8s-pod-network.e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" Workload="localhost-k8s-coredns--787d4945fb--4skrb-eth0" Feb 9 09:43:51.937842 env[1222]: 2024-02-09 09:43:51.933 [INFO][3850] ipam_plugin.go 443: Releasing address using workloadID ContainerID="e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" HandleID="k8s-pod-network.e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" Workload="localhost-k8s-coredns--787d4945fb--4skrb-eth0" Feb 9 09:43:51.937842 env[1222]: 2024-02-09 09:43:51.934 [INFO][3850] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 09:43:51.937842 env[1222]: 2024-02-09 09:43:51.936 [INFO][3842] k8s.go 591: Teardown processing complete. ContainerID="e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" Feb 9 09:43:51.938761 env[1222]: time="2024-02-09T09:43:51.938716873Z" level=info msg="TearDown network for sandbox \"e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4\" successfully" Feb 9 09:43:51.938856 env[1222]: time="2024-02-09T09:43:51.938838752Z" level=info msg="StopPodSandbox for \"e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4\" returns successfully" Feb 9 09:43:51.939397 kubelet[2163]: E0209 09:43:51.939202 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:51.940557 env[1222]: time="2024-02-09T09:43:51.940518260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-4skrb,Uid:c7231818-222f-4202-aba4-773c0e8d3a5b,Namespace:kube-system,Attempt:1,}" Feb 9 09:43:51.952908 kubelet[2163]: E0209 09:43:51.952324 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:51.965852 kubelet[2163]: I0209 09:43:51.964421 2163 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-8jgqw" podStartSLOduration=27.964388929 pod.CreationTimestamp="2024-02-09 09:43:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:43:51.963932613 +0000 UTC m=+42.256713064" watchObservedRunningTime="2024-02-09 09:43:51.964388929 +0000 UTC m=+42.257169300" Feb 9 09:43:52.027000 audit[3906]: NETFILTER_CFG table=filter:117 family=2 entries=9 op=nft_register_rule pid=3906 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 
09:43:52.027000 audit[3906]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=fffffe0b7210 a2=0 a3=ffff8cef16c0 items=0 ppid=2347 pid=3906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:52.027000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:43:52.029000 audit[3906]: NETFILTER_CFG table=nat:118 family=2 entries=51 op=nft_register_chain pid=3906 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:43:52.029000 audit[3906]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19324 a0=3 a1=fffffe0b7210 a2=0 a3=ffff8cef16c0 items=0 ppid=2347 pid=3906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:52.029000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:43:52.070337 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali66d522360bb: link becomes ready Feb 9 09:43:52.071541 systemd-networkd[1095]: cali66d522360bb: Link UP Feb 9 09:43:52.072266 systemd-networkd[1095]: cali66d522360bb: Gained carrier Feb 9 09:43:52.080844 env[1222]: 2024-02-09 09:43:52.001 [INFO][3859] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--787d4945fb--4skrb-eth0 coredns-787d4945fb- kube-system c7231818-222f-4202-aba4-773c0e8d3a5b 718 0 2024-02-09 09:43:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-787d4945fb-4skrb eth0 coredns [] [] 
[kns.kube-system ksa.kube-system.coredns] cali66d522360bb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7ac6c465d5c7370897ebb1d9a18b9f5e4ba76a193e5068b18e0f17c7d0dd7e43" Namespace="kube-system" Pod="coredns-787d4945fb-4skrb" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--4skrb-" Feb 9 09:43:52.080844 env[1222]: 2024-02-09 09:43:52.002 [INFO][3859] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="7ac6c465d5c7370897ebb1d9a18b9f5e4ba76a193e5068b18e0f17c7d0dd7e43" Namespace="kube-system" Pod="coredns-787d4945fb-4skrb" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--4skrb-eth0" Feb 9 09:43:52.080844 env[1222]: 2024-02-09 09:43:52.028 [INFO][3888] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ac6c465d5c7370897ebb1d9a18b9f5e4ba76a193e5068b18e0f17c7d0dd7e43" HandleID="k8s-pod-network.7ac6c465d5c7370897ebb1d9a18b9f5e4ba76a193e5068b18e0f17c7d0dd7e43" Workload="localhost-k8s-coredns--787d4945fb--4skrb-eth0" Feb 9 09:43:52.080844 env[1222]: 2024-02-09 09:43:52.041 [INFO][3888] ipam_plugin.go 268: Auto assigning IP ContainerID="7ac6c465d5c7370897ebb1d9a18b9f5e4ba76a193e5068b18e0f17c7d0dd7e43" HandleID="k8s-pod-network.7ac6c465d5c7370897ebb1d9a18b9f5e4ba76a193e5068b18e0f17c7d0dd7e43" Workload="localhost-k8s-coredns--787d4945fb--4skrb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004df30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-787d4945fb-4skrb", "timestamp":"2024-02-09 09:43:52.028878952 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 09:43:52.080844 env[1222]: 2024-02-09 09:43:52.041 [INFO][3888] ipam_plugin.go 356: About to acquire host-wide IPAM lock. 
Feb 9 09:43:52.080844 env[1222]: 2024-02-09 09:43:52.041 [INFO][3888] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:43:52.080844 env[1222]: 2024-02-09 09:43:52.041 [INFO][3888] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 9 09:43:52.080844 env[1222]: 2024-02-09 09:43:52.042 [INFO][3888] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7ac6c465d5c7370897ebb1d9a18b9f5e4ba76a193e5068b18e0f17c7d0dd7e43" host="localhost" Feb 9 09:43:52.080844 env[1222]: 2024-02-09 09:43:52.047 [INFO][3888] ipam.go 372: Looking up existing affinities for host host="localhost" Feb 9 09:43:52.080844 env[1222]: 2024-02-09 09:43:52.050 [INFO][3888] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 9 09:43:52.080844 env[1222]: 2024-02-09 09:43:52.052 [INFO][3888] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 9 09:43:52.080844 env[1222]: 2024-02-09 09:43:52.054 [INFO][3888] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 9 09:43:52.080844 env[1222]: 2024-02-09 09:43:52.054 [INFO][3888] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7ac6c465d5c7370897ebb1d9a18b9f5e4ba76a193e5068b18e0f17c7d0dd7e43" host="localhost" Feb 9 09:43:52.080844 env[1222]: 2024-02-09 09:43:52.057 [INFO][3888] ipam.go 1682: Creating new handle: k8s-pod-network.7ac6c465d5c7370897ebb1d9a18b9f5e4ba76a193e5068b18e0f17c7d0dd7e43 Feb 9 09:43:52.080844 env[1222]: 2024-02-09 09:43:52.060 [INFO][3888] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7ac6c465d5c7370897ebb1d9a18b9f5e4ba76a193e5068b18e0f17c7d0dd7e43" host="localhost" Feb 9 09:43:52.080844 env[1222]: 2024-02-09 09:43:52.064 [INFO][3888] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.7ac6c465d5c7370897ebb1d9a18b9f5e4ba76a193e5068b18e0f17c7d0dd7e43" host="localhost" Feb 9 09:43:52.080844 env[1222]: 2024-02-09 09:43:52.064 [INFO][3888] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.7ac6c465d5c7370897ebb1d9a18b9f5e4ba76a193e5068b18e0f17c7d0dd7e43" host="localhost" Feb 9 09:43:52.080844 env[1222]: 2024-02-09 09:43:52.064 [INFO][3888] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:43:52.080844 env[1222]: 2024-02-09 09:43:52.064 [INFO][3888] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="7ac6c465d5c7370897ebb1d9a18b9f5e4ba76a193e5068b18e0f17c7d0dd7e43" HandleID="k8s-pod-network.7ac6c465d5c7370897ebb1d9a18b9f5e4ba76a193e5068b18e0f17c7d0dd7e43" Workload="localhost-k8s-coredns--787d4945fb--4skrb-eth0" Feb 9 09:43:52.081495 env[1222]: 2024-02-09 09:43:52.066 [INFO][3859] k8s.go 385: Populated endpoint ContainerID="7ac6c465d5c7370897ebb1d9a18b9f5e4ba76a193e5068b18e0f17c7d0dd7e43" Namespace="kube-system" Pod="coredns-787d4945fb-4skrb" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--4skrb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--4skrb-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"c7231818-222f-4202-aba4-773c0e8d3a5b", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 43, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-787d4945fb-4skrb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66d522360bb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:43:52.081495 env[1222]: 2024-02-09 09:43:52.066 [INFO][3859] k8s.go 386: Calico CNI using IPs: [192.168.88.131/32] ContainerID="7ac6c465d5c7370897ebb1d9a18b9f5e4ba76a193e5068b18e0f17c7d0dd7e43" Namespace="kube-system" Pod="coredns-787d4945fb-4skrb" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--4skrb-eth0" Feb 9 09:43:52.081495 env[1222]: 2024-02-09 09:43:52.066 [INFO][3859] dataplane_linux.go 68: Setting the host side veth name to cali66d522360bb ContainerID="7ac6c465d5c7370897ebb1d9a18b9f5e4ba76a193e5068b18e0f17c7d0dd7e43" Namespace="kube-system" Pod="coredns-787d4945fb-4skrb" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--4skrb-eth0" Feb 9 09:43:52.081495 env[1222]: 2024-02-09 09:43:52.068 [INFO][3859] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="7ac6c465d5c7370897ebb1d9a18b9f5e4ba76a193e5068b18e0f17c7d0dd7e43" Namespace="kube-system" Pod="coredns-787d4945fb-4skrb" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--4skrb-eth0" Feb 9 09:43:52.081495 env[1222]: 2024-02-09 09:43:52.069 [INFO][3859] k8s.go 413: Added 
Mac, interface name, and active container ID to endpoint ContainerID="7ac6c465d5c7370897ebb1d9a18b9f5e4ba76a193e5068b18e0f17c7d0dd7e43" Namespace="kube-system" Pod="coredns-787d4945fb-4skrb" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--4skrb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--4skrb-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"c7231818-222f-4202-aba4-773c0e8d3a5b", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 43, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ac6c465d5c7370897ebb1d9a18b9f5e4ba76a193e5068b18e0f17c7d0dd7e43", Pod:"coredns-787d4945fb-4skrb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66d522360bb", MAC:"f6:2c:80:b1:29:af", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:43:52.081495 env[1222]: 2024-02-09 09:43:52.079 [INFO][3859] k8s.go 491: Wrote updated endpoint to datastore ContainerID="7ac6c465d5c7370897ebb1d9a18b9f5e4ba76a193e5068b18e0f17c7d0dd7e43" Namespace="kube-system" Pod="coredns-787d4945fb-4skrb" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--4skrb-eth0" Feb 9 09:43:52.082000 audit[3935]: NETFILTER_CFG table=filter:119 family=2 entries=6 op=nft_register_rule pid=3935 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:43:52.082000 audit[3935]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffd0dd18a0 a2=0 a3=ffff9930b6c0 items=0 ppid=2347 pid=3935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:52.082000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:43:52.083000 audit[3935]: NETFILTER_CFG table=nat:120 family=2 entries=60 op=nft_register_rule pid=3935 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:43:52.083000 audit[3935]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19324 a0=3 a1=ffffd0dd18a0 a2=0 a3=ffff9930b6c0 items=0 ppid=2347 pid=3935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:52.083000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:43:52.091000 audit[3948]: NETFILTER_CFG table=filter:121 family=2 entries=34 op=nft_register_chain pid=3948 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 09:43:52.091000 audit[3948]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=17900 a0=3 a1=ffffc65e9a90 a2=0 a3=ffffba503fa8 items=0 ppid=3367 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:52.091000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 09:43:52.098770 env[1222]: time="2024-02-09T09:43:52.098699623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:43:52.098916 env[1222]: time="2024-02-09T09:43:52.098752383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:43:52.098916 env[1222]: time="2024-02-09T09:43:52.098889862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:43:52.099182 env[1222]: time="2024-02-09T09:43:52.099149020Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ac6c465d5c7370897ebb1d9a18b9f5e4ba76a193e5068b18e0f17c7d0dd7e43 pid=3957 runtime=io.containerd.runc.v2 Feb 9 09:43:52.119342 systemd[1]: run-netns-cni\x2d239c5243\x2df42f\x2dc972\x2dbd44\x2d611c35e8dadb.mount: Deactivated successfully. 
Feb 9 09:43:52.132674 systemd-resolved[1155]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 09:43:52.150398 env[1222]: time="2024-02-09T09:43:52.150359982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-4skrb,Uid:c7231818-222f-4202-aba4-773c0e8d3a5b,Namespace:kube-system,Attempt:1,} returns sandbox id \"7ac6c465d5c7370897ebb1d9a18b9f5e4ba76a193e5068b18e0f17c7d0dd7e43\"" Feb 9 09:43:52.151049 kubelet[2163]: E0209 09:43:52.151029 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:52.153287 env[1222]: time="2024-02-09T09:43:52.153252681Z" level=info msg="CreateContainer within sandbox \"7ac6c465d5c7370897ebb1d9a18b9f5e4ba76a193e5068b18e0f17c7d0dd7e43\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 09:43:52.162797 env[1222]: time="2024-02-09T09:43:52.162758095Z" level=info msg="CreateContainer within sandbox \"7ac6c465d5c7370897ebb1d9a18b9f5e4ba76a193e5068b18e0f17c7d0dd7e43\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9752b1113dae1f17189b5d3fafcfa5b3af2623e3dd1aa5c683ccedde04a7e35f\"" Feb 9 09:43:52.164570 env[1222]: time="2024-02-09T09:43:52.163406130Z" level=info msg="StartContainer for \"9752b1113dae1f17189b5d3fafcfa5b3af2623e3dd1aa5c683ccedde04a7e35f\"" Feb 9 09:43:52.212700 env[1222]: time="2024-02-09T09:43:52.212435187Z" level=info msg="StartContainer for \"9752b1113dae1f17189b5d3fafcfa5b3af2623e3dd1aa5c683ccedde04a7e35f\" returns successfully" Feb 9 09:43:52.920464 systemd-networkd[1095]: cali7f8e52e54c4: Gained IPv6LL Feb 9 09:43:52.921035 systemd-networkd[1095]: cali95e16740f6a: Gained IPv6LL Feb 9 09:43:52.955535 kubelet[2163]: E0209 09:43:52.955511 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:52.955668 kubelet[2163]: E0209 09:43:52.955569 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:53.121639 kernel: kauditd_printk_skb: 107 callbacks suppressed Feb 9 09:43:53.121738 kernel: audit: type=1325 audit(1707471833.118:307): table=filter:122 family=2 entries=6 op=nft_register_rule pid=4059 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:43:53.118000 audit[4059]: NETFILTER_CFG table=filter:122 family=2 entries=6 op=nft_register_rule pid=4059 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:43:53.124718 kernel: audit: type=1300 audit(1707471833.118:307): arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=fffff293e3b0 a2=0 a3=ffff975d26c0 items=0 ppid=2347 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:53.118000 audit[4059]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=fffff293e3b0 a2=0 a3=ffff975d26c0 items=0 ppid=2347 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:53.118000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:43:53.130328 kernel: audit: type=1327 audit(1707471833.118:307): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:43:53.130000 audit[4059]: NETFILTER_CFG table=nat:123 family=2 entries=60 op=nft_register_rule pid=4059 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 
09:43:53.130000 audit[4059]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19324 a0=3 a1=fffff293e3b0 a2=0 a3=ffff975d26c0 items=0 ppid=2347 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:53.136259 kernel: audit: type=1325 audit(1707471833.130:308): table=nat:123 family=2 entries=60 op=nft_register_rule pid=4059 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:43:53.136344 kernel: audit: type=1300 audit(1707471833.130:308): arch=c00000b7 syscall=211 success=yes exit=19324 a0=3 a1=fffff293e3b0 a2=0 a3=ffff975d26c0 items=0 ppid=2347 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:53.136377 kernel: audit: type=1327 audit(1707471833.130:308): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:43:53.130000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:43:53.443992 env[1222]: time="2024-02-09T09:43:53.443929469Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:53.445233 env[1222]: time="2024-02-09T09:43:53.445198341Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:094645649618376e48b5ec13a94a164d53dbdf819b7ab644f080b751f24560c8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:53.446601 env[1222]: time="2024-02-09T09:43:53.446562531Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:53.451251 env[1222]: time="2024-02-09T09:43:53.451208139Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:e264ab1fb2f1ae90dd1d84e226d11d2eb4350e74ac27de4c65f29f5aadba5bb1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:53.451800 env[1222]: time="2024-02-09T09:43:53.451750456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\" returns image reference \"sha256:094645649618376e48b5ec13a94a164d53dbdf819b7ab644f080b751f24560c8\"" Feb 9 09:43:53.462434 env[1222]: time="2024-02-09T09:43:53.461860906Z" level=info msg="CreateContainer within sandbox \"37f03f2ae789feaa8764797cacb4ca9cc377713a5c99f9dbe7a3b1eacbb8e145\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 9 09:43:53.473252 env[1222]: time="2024-02-09T09:43:53.473198029Z" level=info msg="CreateContainer within sandbox \"37f03f2ae789feaa8764797cacb4ca9cc377713a5c99f9dbe7a3b1eacbb8e145\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5921f6f62ac7019ae7b389075404f5812dd72a9e92a8d6388bc0b1d218a425b8\"" Feb 9 09:43:53.474031 env[1222]: time="2024-02-09T09:43:53.473998583Z" level=info msg="StartContainer for \"5921f6f62ac7019ae7b389075404f5812dd72a9e92a8d6388bc0b1d218a425b8\"" Feb 9 09:43:53.561545 systemd-networkd[1095]: cali66d522360bb: Gained IPv6LL Feb 9 09:43:53.564751 env[1222]: time="2024-02-09T09:43:53.564714921Z" level=info msg="StartContainer for \"5921f6f62ac7019ae7b389075404f5812dd72a9e92a8d6388bc0b1d218a425b8\" returns successfully" Feb 9 09:43:53.958182 kubelet[2163]: E0209 09:43:53.958139 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Feb 9 09:43:53.959617 kubelet[2163]: E0209 09:43:53.959591 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:53.969161 kubelet[2163]: I0209 09:43:53.969105 2163 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5d664cd787-78hlq" podStartSLOduration=-9.2233720118857e+09 pod.CreationTimestamp="2024-02-09 09:43:29 +0000 UTC" firstStartedPulling="2024-02-09 09:43:51.36484842 +0000 UTC m=+41.657628791" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:43:53.968194555 +0000 UTC m=+44.260974966" watchObservedRunningTime="2024-02-09 09:43:53.969075709 +0000 UTC m=+44.261856120" Feb 9 09:43:53.969319 kubelet[2163]: I0209 09:43:53.969181 2163 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-4skrb" podStartSLOduration=29.969164748 pod.CreationTimestamp="2024-02-09 09:43:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:43:52.966523307 +0000 UTC m=+43.259303718" watchObservedRunningTime="2024-02-09 09:43:53.969164748 +0000 UTC m=+44.261945159" Feb 9 09:43:54.014000 audit[4122]: NETFILTER_CFG table=filter:124 family=2 entries=6 op=nft_register_rule pid=4122 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:43:54.014000 audit[4122]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffd78c8400 a2=0 a3=ffffae5096c0 items=0 ppid=2347 pid=4122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:54.020641 kernel: audit: type=1325 audit(1707471834.014:309): table=filter:124 family=2 entries=6 
op=nft_register_rule pid=4122 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:43:54.020727 kernel: audit: type=1300 audit(1707471834.014:309): arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffd78c8400 a2=0 a3=ffffae5096c0 items=0 ppid=2347 pid=4122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:54.020758 kernel: audit: type=1327 audit(1707471834.014:309): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:43:54.014000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:43:54.020000 audit[4122]: NETFILTER_CFG table=nat:125 family=2 entries=72 op=nft_register_chain pid=4122 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:43:54.020000 audit[4122]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffd78c8400 a2=0 a3=ffffae5096c0 items=0 ppid=2347 pid=4122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:54.020000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:43:54.026318 kernel: audit: type=1325 audit(1707471834.020:310): table=nat:125 family=2 entries=72 op=nft_register_chain pid=4122 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:43:54.848778 env[1222]: time="2024-02-09T09:43:54.848734874Z" level=info msg="StopPodSandbox for \"ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479\"" Feb 9 09:43:54.938999 env[1222]: 2024-02-09 09:43:54.900 [INFO][4143] k8s.go 578: Cleaning up netns 
ContainerID="ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" Feb 9 09:43:54.938999 env[1222]: 2024-02-09 09:43:54.901 [INFO][4143] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" iface="eth0" netns="/var/run/netns/cni-964528ac-4275-efd9-d3b4-5ec43539f13b" Feb 9 09:43:54.938999 env[1222]: 2024-02-09 09:43:54.901 [INFO][4143] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" iface="eth0" netns="/var/run/netns/cni-964528ac-4275-efd9-d3b4-5ec43539f13b" Feb 9 09:43:54.938999 env[1222]: 2024-02-09 09:43:54.901 [INFO][4143] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" iface="eth0" netns="/var/run/netns/cni-964528ac-4275-efd9-d3b4-5ec43539f13b" Feb 9 09:43:54.938999 env[1222]: 2024-02-09 09:43:54.901 [INFO][4143] k8s.go 585: Releasing IP address(es) ContainerID="ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" Feb 9 09:43:54.938999 env[1222]: 2024-02-09 09:43:54.901 [INFO][4143] utils.go 188: Calico CNI releasing IP address ContainerID="ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" Feb 9 09:43:54.938999 env[1222]: 2024-02-09 09:43:54.918 [INFO][4150] ipam_plugin.go 415: Releasing address using handleID ContainerID="ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" HandleID="k8s-pod-network.ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" Workload="localhost-k8s-csi--node--driver--dvjfd-eth0" Feb 9 09:43:54.938999 env[1222]: 2024-02-09 09:43:54.918 [INFO][4150] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:43:54.938999 env[1222]: 2024-02-09 09:43:54.918 [INFO][4150] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 09:43:54.938999 env[1222]: 2024-02-09 09:43:54.930 [WARNING][4150] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" HandleID="k8s-pod-network.ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" Workload="localhost-k8s-csi--node--driver--dvjfd-eth0" Feb 9 09:43:54.938999 env[1222]: 2024-02-09 09:43:54.930 [INFO][4150] ipam_plugin.go 443: Releasing address using workloadID ContainerID="ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" HandleID="k8s-pod-network.ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" Workload="localhost-k8s-csi--node--driver--dvjfd-eth0" Feb 9 09:43:54.938999 env[1222]: 2024-02-09 09:43:54.933 [INFO][4150] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:43:54.938999 env[1222]: 2024-02-09 09:43:54.936 [INFO][4143] k8s.go 591: Teardown processing complete. ContainerID="ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" Feb 9 09:43:54.940745 env[1222]: time="2024-02-09T09:43:54.940405738Z" level=info msg="TearDown network for sandbox \"ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479\" successfully" Feb 9 09:43:54.940745 env[1222]: time="2024-02-09T09:43:54.940441778Z" level=info msg="StopPodSandbox for \"ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479\" returns successfully" Feb 9 09:43:54.941438 systemd[1]: run-netns-cni\x2d964528ac\x2d4275\x2defd9\x2dd3b4\x2d5ec43539f13b.mount: Deactivated successfully. 
Feb 9 09:43:54.941996 env[1222]: time="2024-02-09T09:43:54.941941008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dvjfd,Uid:440d5d1f-6bf0-4b9d-b4ee-c3a791745bc9,Namespace:calico-system,Attempt:1,}" Feb 9 09:43:54.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.11:22-10.0.0.1:59546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:54.945507 systemd[1]: Started sshd@7-10.0.0.11:22-10.0.0.1:59546.service. Feb 9 09:43:54.962532 kubelet[2163]: I0209 09:43:54.961959 2163 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 9 09:43:54.962898 kubelet[2163]: E0209 09:43:54.962875 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:55.010000 audit[4160]: USER_ACCT pid=4160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:43:55.012159 sshd[4160]: Accepted publickey for core from 10.0.0.1 port 59546 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:43:55.013000 audit[4160]: CRED_ACQ pid=4160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:43:55.013000 audit[4160]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff7f4d570 a2=3 a3=1 items=0 ppid=1 pid=4160 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:55.013000 audit: 
PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 09:43:55.015746 sshd[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:43:55.022232 systemd-logind[1203]: New session 8 of user core. Feb 9 09:43:55.022263 systemd[1]: Started session-8.scope. Feb 9 09:43:55.026000 audit[4160]: USER_START pid=4160 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:43:55.028000 audit[4184]: CRED_ACQ pid=4184 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:43:55.077267 systemd-networkd[1095]: cali485ad967624: Link UP Feb 9 09:43:55.078390 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:43:55.078502 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali485ad967624: link becomes ready Feb 9 09:43:55.079191 systemd-networkd[1095]: cali485ad967624: Gained carrier Feb 9 09:43:55.094740 env[1222]: 2024-02-09 09:43:54.999 [INFO][4161] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--dvjfd-eth0 csi-node-driver- calico-system 440d5d1f-6bf0-4b9d-b4ee-c3a791745bc9 783 0 2024-02-09 09:43:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7c77f88967 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-dvjfd eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali485ad967624 [] []}} 
ContainerID="3bc82d1af0917210c0c5a00e1eb3bd21a24d09f96bd0fb748cd461200f57e30e" Namespace="calico-system" Pod="csi-node-driver-dvjfd" WorkloadEndpoint="localhost-k8s-csi--node--driver--dvjfd-" Feb 9 09:43:55.094740 env[1222]: 2024-02-09 09:43:54.999 [INFO][4161] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="3bc82d1af0917210c0c5a00e1eb3bd21a24d09f96bd0fb748cd461200f57e30e" Namespace="calico-system" Pod="csi-node-driver-dvjfd" WorkloadEndpoint="localhost-k8s-csi--node--driver--dvjfd-eth0" Feb 9 09:43:55.094740 env[1222]: 2024-02-09 09:43:55.034 [INFO][4176] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3bc82d1af0917210c0c5a00e1eb3bd21a24d09f96bd0fb748cd461200f57e30e" HandleID="k8s-pod-network.3bc82d1af0917210c0c5a00e1eb3bd21a24d09f96bd0fb748cd461200f57e30e" Workload="localhost-k8s-csi--node--driver--dvjfd-eth0" Feb 9 09:43:55.094740 env[1222]: 2024-02-09 09:43:55.046 [INFO][4176] ipam_plugin.go 268: Auto assigning IP ContainerID="3bc82d1af0917210c0c5a00e1eb3bd21a24d09f96bd0fb748cd461200f57e30e" HandleID="k8s-pod-network.3bc82d1af0917210c0c5a00e1eb3bd21a24d09f96bd0fb748cd461200f57e30e" Workload="localhost-k8s-csi--node--driver--dvjfd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400029c7b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-dvjfd", "timestamp":"2024-02-09 09:43:55.034296111 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 09:43:55.094740 env[1222]: 2024-02-09 09:43:55.046 [INFO][4176] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:43:55.094740 env[1222]: 2024-02-09 09:43:55.046 [INFO][4176] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 09:43:55.094740 env[1222]: 2024-02-09 09:43:55.046 [INFO][4176] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 9 09:43:55.094740 env[1222]: 2024-02-09 09:43:55.047 [INFO][4176] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3bc82d1af0917210c0c5a00e1eb3bd21a24d09f96bd0fb748cd461200f57e30e" host="localhost" Feb 9 09:43:55.094740 env[1222]: 2024-02-09 09:43:55.051 [INFO][4176] ipam.go 372: Looking up existing affinities for host host="localhost" Feb 9 09:43:55.094740 env[1222]: 2024-02-09 09:43:55.054 [INFO][4176] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 9 09:43:55.094740 env[1222]: 2024-02-09 09:43:55.055 [INFO][4176] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 9 09:43:55.094740 env[1222]: 2024-02-09 09:43:55.057 [INFO][4176] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 9 09:43:55.094740 env[1222]: 2024-02-09 09:43:55.058 [INFO][4176] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3bc82d1af0917210c0c5a00e1eb3bd21a24d09f96bd0fb748cd461200f57e30e" host="localhost" Feb 9 09:43:55.094740 env[1222]: 2024-02-09 09:43:55.061 [INFO][4176] ipam.go 1682: Creating new handle: k8s-pod-network.3bc82d1af0917210c0c5a00e1eb3bd21a24d09f96bd0fb748cd461200f57e30e Feb 9 09:43:55.094740 env[1222]: 2024-02-09 09:43:55.065 [INFO][4176] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3bc82d1af0917210c0c5a00e1eb3bd21a24d09f96bd0fb748cd461200f57e30e" host="localhost" Feb 9 09:43:55.094740 env[1222]: 2024-02-09 09:43:55.071 [INFO][4176] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.3bc82d1af0917210c0c5a00e1eb3bd21a24d09f96bd0fb748cd461200f57e30e" host="localhost" Feb 9 09:43:55.094740 env[1222]: 2024-02-09 09:43:55.071 [INFO][4176] ipam.go 
847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.3bc82d1af0917210c0c5a00e1eb3bd21a24d09f96bd0fb748cd461200f57e30e" host="localhost" Feb 9 09:43:55.094740 env[1222]: 2024-02-09 09:43:55.071 [INFO][4176] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:43:55.094740 env[1222]: 2024-02-09 09:43:55.071 [INFO][4176] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="3bc82d1af0917210c0c5a00e1eb3bd21a24d09f96bd0fb748cd461200f57e30e" HandleID="k8s-pod-network.3bc82d1af0917210c0c5a00e1eb3bd21a24d09f96bd0fb748cd461200f57e30e" Workload="localhost-k8s-csi--node--driver--dvjfd-eth0" Feb 9 09:43:55.095324 env[1222]: 2024-02-09 09:43:55.074 [INFO][4161] k8s.go 385: Populated endpoint ContainerID="3bc82d1af0917210c0c5a00e1eb3bd21a24d09f96bd0fb748cd461200f57e30e" Namespace="calico-system" Pod="csi-node-driver-dvjfd" WorkloadEndpoint="localhost-k8s-csi--node--driver--dvjfd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dvjfd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"440d5d1f-6bf0-4b9d-b4ee-c3a791745bc9", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 43, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"localhost", ContainerID:"", Pod:"csi-node-driver-dvjfd", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali485ad967624", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:43:55.095324 env[1222]: 2024-02-09 09:43:55.074 [INFO][4161] k8s.go 386: Calico CNI using IPs: [192.168.88.132/32] ContainerID="3bc82d1af0917210c0c5a00e1eb3bd21a24d09f96bd0fb748cd461200f57e30e" Namespace="calico-system" Pod="csi-node-driver-dvjfd" WorkloadEndpoint="localhost-k8s-csi--node--driver--dvjfd-eth0" Feb 9 09:43:55.095324 env[1222]: 2024-02-09 09:43:55.074 [INFO][4161] dataplane_linux.go 68: Setting the host side veth name to cali485ad967624 ContainerID="3bc82d1af0917210c0c5a00e1eb3bd21a24d09f96bd0fb748cd461200f57e30e" Namespace="calico-system" Pod="csi-node-driver-dvjfd" WorkloadEndpoint="localhost-k8s-csi--node--driver--dvjfd-eth0" Feb 9 09:43:55.095324 env[1222]: 2024-02-09 09:43:55.079 [INFO][4161] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="3bc82d1af0917210c0c5a00e1eb3bd21a24d09f96bd0fb748cd461200f57e30e" Namespace="calico-system" Pod="csi-node-driver-dvjfd" WorkloadEndpoint="localhost-k8s-csi--node--driver--dvjfd-eth0" Feb 9 09:43:55.095324 env[1222]: 2024-02-09 09:43:55.080 [INFO][4161] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="3bc82d1af0917210c0c5a00e1eb3bd21a24d09f96bd0fb748cd461200f57e30e" Namespace="calico-system" Pod="csi-node-driver-dvjfd" WorkloadEndpoint="localhost-k8s-csi--node--driver--dvjfd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dvjfd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", 
UID:"440d5d1f-6bf0-4b9d-b4ee-c3a791745bc9", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 43, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3bc82d1af0917210c0c5a00e1eb3bd21a24d09f96bd0fb748cd461200f57e30e", Pod:"csi-node-driver-dvjfd", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali485ad967624", MAC:"32:d9:6e:fb:28:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:43:55.095324 env[1222]: 2024-02-09 09:43:55.089 [INFO][4161] k8s.go 491: Wrote updated endpoint to datastore ContainerID="3bc82d1af0917210c0c5a00e1eb3bd21a24d09f96bd0fb748cd461200f57e30e" Namespace="calico-system" Pod="csi-node-driver-dvjfd" WorkloadEndpoint="localhost-k8s-csi--node--driver--dvjfd-eth0" Feb 9 09:43:55.110273 env[1222]: time="2024-02-09T09:43:55.110117812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:43:55.110273 env[1222]: time="2024-02-09T09:43:55.110172291Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:43:55.110626 env[1222]: time="2024-02-09T09:43:55.110183411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:43:55.110898 env[1222]: time="2024-02-09T09:43:55.110859767Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3bc82d1af0917210c0c5a00e1eb3bd21a24d09f96bd0fb748cd461200f57e30e pid=4217 runtime=io.containerd.runc.v2 Feb 9 09:43:55.113000 audit[4218]: NETFILTER_CFG table=filter:126 family=2 entries=42 op=nft_register_chain pid=4218 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 09:43:55.113000 audit[4218]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20696 a0=3 a1=ffffdb9b1af0 a2=0 a3=ffffa1257fa8 items=0 ppid=3367 pid=4218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:43:55.113000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 09:43:55.198357 systemd-resolved[1155]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 09:43:55.209935 env[1222]: time="2024-02-09T09:43:55.209887994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dvjfd,Uid:440d5d1f-6bf0-4b9d-b4ee-c3a791745bc9,Namespace:calico-system,Attempt:1,} returns sandbox id \"3bc82d1af0917210c0c5a00e1eb3bd21a24d09f96bd0fb748cd461200f57e30e\"" Feb 9 09:43:55.211291 env[1222]: time="2024-02-09T09:43:55.211239745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\"" Feb 9 09:43:55.215677 sshd[4160]: pam_unix(sshd:session): session closed for user core Feb 9 09:43:55.215000 audit[4160]: 
USER_END pid=4160 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:43:55.215000 audit[4160]: CRED_DISP pid=4160 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:43:55.218903 systemd-logind[1203]: Session 8 logged out. Waiting for processes to exit. Feb 9 09:43:55.219198 systemd[1]: sshd@7-10.0.0.11:22-10.0.0.1:59546.service: Deactivated successfully. Feb 9 09:43:55.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.11:22-10.0.0.1:59546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:43:55.220116 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 09:43:55.221155 systemd-logind[1203]: Removed session 8. 
Feb 9 09:43:55.964798 kubelet[2163]: E0209 09:43:55.964749 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:56.504463 systemd-networkd[1095]: cali485ad967624: Gained IPv6LL Feb 9 09:43:56.737913 env[1222]: time="2024-02-09T09:43:56.737853211Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:56.739430 env[1222]: time="2024-02-09T09:43:56.739393321Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4b71e7439e0eba34a97844591560a009f37e8e6c17a386a34d416c1cc872dee8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:56.740887 env[1222]: time="2024-02-09T09:43:56.740862032Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:56.742160 env[1222]: time="2024-02-09T09:43:56.742116824Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:2b9021393c17e87ba8a3c89f5b3719941812f4e4751caa0b71eb2233bff48738,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:56.742598 env[1222]: time="2024-02-09T09:43:56.742572901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\" returns image reference \"sha256:4b71e7439e0eba34a97844591560a009f37e8e6c17a386a34d416c1cc872dee8\"" Feb 9 09:43:56.745195 env[1222]: time="2024-02-09T09:43:56.745141924Z" level=info msg="CreateContainer within sandbox \"3bc82d1af0917210c0c5a00e1eb3bd21a24d09f96bd0fb748cd461200f57e30e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 9 09:43:56.759111 env[1222]: time="2024-02-09T09:43:56.759017874Z" level=info msg="CreateContainer within sandbox 
\"3bc82d1af0917210c0c5a00e1eb3bd21a24d09f96bd0fb748cd461200f57e30e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b0b8dad83cccd6e7ee145afa340bf5b7241894c95349fd3e4fc02dba20a2264f\"" Feb 9 09:43:56.760985 env[1222]: time="2024-02-09T09:43:56.760955062Z" level=info msg="StartContainer for \"b0b8dad83cccd6e7ee145afa340bf5b7241894c95349fd3e4fc02dba20a2264f\"" Feb 9 09:43:56.849975 env[1222]: time="2024-02-09T09:43:56.849903046Z" level=info msg="StartContainer for \"b0b8dad83cccd6e7ee145afa340bf5b7241894c95349fd3e4fc02dba20a2264f\" returns successfully" Feb 9 09:43:56.851267 env[1222]: time="2024-02-09T09:43:56.851243197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\"" Feb 9 09:43:57.906156 kubelet[2163]: I0209 09:43:57.906115 2163 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 9 09:43:57.906952 kubelet[2163]: E0209 09:43:57.906926 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:57.987837 kubelet[2163]: E0209 09:43:57.987601 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:43:58.683020 env[1222]: time="2024-02-09T09:43:58.682974285Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:58.684717 env[1222]: time="2024-02-09T09:43:58.684677474Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9dbda087e98c46610fb8629cf530f1fe49eee4b17d2afe455664ca446ec39d43,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:58.686293 env[1222]: time="2024-02-09T09:43:58.686269024Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:58.691803 env[1222]: time="2024-02-09T09:43:58.691765470Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:45a7aba6020a7cf7b866cb8a8d481b30c97e9b3407e1459aaa65a5b4cc06633a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:43:58.692276 env[1222]: time="2024-02-09T09:43:58.692247107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\" returns image reference \"sha256:9dbda087e98c46610fb8629cf530f1fe49eee4b17d2afe455664ca446ec39d43\"" Feb 9 09:43:58.694374 env[1222]: time="2024-02-09T09:43:58.694343774Z" level=info msg="CreateContainer within sandbox \"3bc82d1af0917210c0c5a00e1eb3bd21a24d09f96bd0fb748cd461200f57e30e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 9 09:43:58.704684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4155925997.mount: Deactivated successfully. 
Feb 9 09:43:58.706765 env[1222]: time="2024-02-09T09:43:58.706728456Z" level=info msg="CreateContainer within sandbox \"3bc82d1af0917210c0c5a00e1eb3bd21a24d09f96bd0fb748cd461200f57e30e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ab8ed6ee51492d9e8e880d9a429834bced77d1a35bc56bb4cdeb6c581b859f32\"" Feb 9 09:43:58.708408 env[1222]: time="2024-02-09T09:43:58.707191693Z" level=info msg="StartContainer for \"ab8ed6ee51492d9e8e880d9a429834bced77d1a35bc56bb4cdeb6c581b859f32\"" Feb 9 09:43:58.796504 env[1222]: time="2024-02-09T09:43:58.796460455Z" level=info msg="StartContainer for \"ab8ed6ee51492d9e8e880d9a429834bced77d1a35bc56bb4cdeb6c581b859f32\" returns successfully" Feb 9 09:43:58.894543 kubelet[2163]: I0209 09:43:58.894496 2163 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 9 09:43:58.894709 kubelet[2163]: I0209 09:43:58.894655 2163 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 9 09:43:58.984283 kubelet[2163]: I0209 09:43:58.984185 2163 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-dvjfd" podStartSLOduration=-9.223372006870623e+09 pod.CreationTimestamp="2024-02-09 09:43:29 +0000 UTC" firstStartedPulling="2024-02-09 09:43:55.210972467 +0000 UTC m=+45.503752878" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:43:58.984008962 +0000 UTC m=+49.276789333" watchObservedRunningTime="2024-02-09 09:43:58.984152922 +0000 UTC m=+49.276933333" Feb 9 09:44:00.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.11:22-10.0.0.1:59554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:44:00.218620 systemd[1]: Started sshd@8-10.0.0.11:22-10.0.0.1:59554.service. Feb 9 09:44:00.219612 kernel: kauditd_printk_skb: 16 callbacks suppressed Feb 9 09:44:00.219655 kernel: audit: type=1130 audit(1707471840.217:321): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.11:22-10.0.0.1:59554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:44:00.266000 audit[4386]: USER_ACCT pid=4386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:00.268031 sshd[4386]: Accepted publickey for core from 10.0.0.1 port 59554 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:44:00.267000 audit[4386]: CRED_ACQ pid=4386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:00.274430 kernel: audit: type=1101 audit(1707471840.266:322): pid=4386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:00.274521 kernel: audit: type=1103 audit(1707471840.267:323): pid=4386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:00.274553 kernel: audit: type=1006 audit(1707471840.267:324): pid=4386 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Feb 
9 09:44:00.274577 kernel: audit: type=1300 audit(1707471840.267:324): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffec064a50 a2=3 a3=1 items=0 ppid=1 pid=4386 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:44:00.267000 audit[4386]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffec064a50 a2=3 a3=1 items=0 ppid=1 pid=4386 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:44:00.272592 sshd[4386]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:44:00.267000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 09:44:00.279239 kernel: audit: type=1327 audit(1707471840.267:324): proctitle=737368643A20636F7265205B707269765D Feb 9 09:44:00.281963 systemd-logind[1203]: New session 9 of user core. Feb 9 09:44:00.282439 systemd[1]: Started session-9.scope. 
Feb 9 09:44:00.287000 audit[4386]: USER_START pid=4386 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:00.287000 audit[4389]: CRED_ACQ pid=4389 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:00.294506 kernel: audit: type=1105 audit(1707471840.287:325): pid=4386 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:00.294568 kernel: audit: type=1103 audit(1707471840.287:326): pid=4389 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:00.425514 sshd[4386]: pam_unix(sshd:session): session closed for user core Feb 9 09:44:00.425000 audit[4386]: USER_END pid=4386 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:00.425000 audit[4386]: CRED_DISP pid=4386 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:00.431148 systemd[1]: sshd@8-10.0.0.11:22-10.0.0.1:59554.service: Deactivated 
successfully. Feb 9 09:44:00.432221 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 09:44:00.432232 systemd-logind[1203]: Session 9 logged out. Waiting for processes to exit. Feb 9 09:44:00.433059 kernel: audit: type=1106 audit(1707471840.425:327): pid=4386 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:00.433114 kernel: audit: type=1104 audit(1707471840.425:328): pid=4386 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:00.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.11:22-10.0.0.1:59554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:44:00.433249 systemd-logind[1203]: Removed session 9. Feb 9 09:44:05.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.11:22-10.0.0.1:43776 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:44:05.427822 systemd[1]: Started sshd@9-10.0.0.11:22-10.0.0.1:43776.service. Feb 9 09:44:05.428515 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 09:44:05.428544 kernel: audit: type=1130 audit(1707471845.426:330): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.11:22-10.0.0.1:43776 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:44:05.470238 sshd[4403]: Accepted publickey for core from 10.0.0.1 port 43776 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:44:05.468000 audit[4403]: USER_ACCT pid=4403 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:05.471472 sshd[4403]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:44:05.469000 audit[4403]: CRED_ACQ pid=4403 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:05.475506 kernel: audit: type=1101 audit(1707471845.468:331): pid=4403 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:05.475620 kernel: audit: type=1103 audit(1707471845.469:332): pid=4403 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:05.477200 kernel: audit: type=1006 audit(1707471845.469:333): pid=4403 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Feb 9 09:44:05.477261 kernel: audit: type=1300 audit(1707471845.469:333): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd5ef1810 a2=3 a3=1 items=0 ppid=1 pid=4403 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 
09:44:05.469000 audit[4403]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd5ef1810 a2=3 a3=1 items=0 ppid=1 pid=4403 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:44:05.479406 systemd-logind[1203]: New session 10 of user core. Feb 9 09:44:05.479957 systemd[1]: Started session-10.scope. Feb 9 09:44:05.469000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 09:44:05.481681 kernel: audit: type=1327 audit(1707471845.469:333): proctitle=737368643A20636F7265205B707269765D Feb 9 09:44:05.484000 audit[4403]: USER_START pid=4403 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:05.486000 audit[4406]: CRED_ACQ pid=4406 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:05.490680 kernel: audit: type=1105 audit(1707471845.484:334): pid=4403 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:05.490755 kernel: audit: type=1103 audit(1707471845.486:335): pid=4406 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:05.605628 sshd[4403]: pam_unix(sshd:session): session closed for user core Feb 9 09:44:05.605000 audit[4403]: 
USER_END pid=4403 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:05.605000 audit[4403]: CRED_DISP pid=4403 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:05.608869 systemd[1]: sshd@9-10.0.0.11:22-10.0.0.1:43776.service: Deactivated successfully. Feb 9 09:44:05.609995 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 09:44:05.612598 kernel: audit: type=1106 audit(1707471845.605:336): pid=4403 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:05.612674 kernel: audit: type=1104 audit(1707471845.605:337): pid=4403 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:05.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.11:22-10.0.0.1:43776 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:44:05.613762 systemd-logind[1203]: Session 10 logged out. Waiting for processes to exit. Feb 9 09:44:05.616131 systemd-logind[1203]: Removed session 10. 
Feb 9 09:44:07.644265 kubelet[2163]: I0209 09:44:07.644230 2163 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 9 09:44:09.799144 env[1222]: time="2024-02-09T09:44:09.799103989Z" level=info msg="StopPodSandbox for \"e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4\"" Feb 9 09:44:09.870441 env[1222]: 2024-02-09 09:44:09.834 [WARNING][4479] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--4skrb-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"c7231818-222f-4202-aba4-773c0e8d3a5b", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 43, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ac6c465d5c7370897ebb1d9a18b9f5e4ba76a193e5068b18e0f17c7d0dd7e43", Pod:"coredns-787d4945fb-4skrb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66d522360bb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:44:09.870441 env[1222]: 2024-02-09 09:44:09.834 [INFO][4479] k8s.go 578: Cleaning up netns ContainerID="e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" Feb 9 09:44:09.870441 env[1222]: 2024-02-09 09:44:09.834 [INFO][4479] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" iface="eth0" netns="" Feb 9 09:44:09.870441 env[1222]: 2024-02-09 09:44:09.834 [INFO][4479] k8s.go 585: Releasing IP address(es) ContainerID="e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" Feb 9 09:44:09.870441 env[1222]: 2024-02-09 09:44:09.834 [INFO][4479] utils.go 188: Calico CNI releasing IP address ContainerID="e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" Feb 9 09:44:09.870441 env[1222]: 2024-02-09 09:44:09.856 [INFO][4487] ipam_plugin.go 415: Releasing address using handleID ContainerID="e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" HandleID="k8s-pod-network.e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" Workload="localhost-k8s-coredns--787d4945fb--4skrb-eth0" Feb 9 09:44:09.870441 env[1222]: 2024-02-09 09:44:09.856 [INFO][4487] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:44:09.870441 env[1222]: 2024-02-09 09:44:09.856 [INFO][4487] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:44:09.870441 env[1222]: 2024-02-09 09:44:09.865 [WARNING][4487] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" HandleID="k8s-pod-network.e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" Workload="localhost-k8s-coredns--787d4945fb--4skrb-eth0" Feb 9 09:44:09.870441 env[1222]: 2024-02-09 09:44:09.866 [INFO][4487] ipam_plugin.go 443: Releasing address using workloadID ContainerID="e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" HandleID="k8s-pod-network.e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" Workload="localhost-k8s-coredns--787d4945fb--4skrb-eth0" Feb 9 09:44:09.870441 env[1222]: 2024-02-09 09:44:09.867 [INFO][4487] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:44:09.870441 env[1222]: 2024-02-09 09:44:09.869 [INFO][4479] k8s.go 591: Teardown processing complete. ContainerID="e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" Feb 9 09:44:09.870880 env[1222]: time="2024-02-09T09:44:09.870471720Z" level=info msg="TearDown network for sandbox \"e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4\" successfully" Feb 9 09:44:09.870880 env[1222]: time="2024-02-09T09:44:09.870500760Z" level=info msg="StopPodSandbox for \"e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4\" returns successfully" Feb 9 09:44:09.871295 env[1222]: time="2024-02-09T09:44:09.871267395Z" level=info msg="RemovePodSandbox for \"e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4\"" Feb 9 09:44:09.871380 env[1222]: time="2024-02-09T09:44:09.871312235Z" level=info msg="Forcibly stopping sandbox \"e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4\"" Feb 9 09:44:09.940834 env[1222]: 2024-02-09 09:44:09.907 [WARNING][4511] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--4skrb-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"c7231818-222f-4202-aba4-773c0e8d3a5b", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 43, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ac6c465d5c7370897ebb1d9a18b9f5e4ba76a193e5068b18e0f17c7d0dd7e43", Pod:"coredns-787d4945fb-4skrb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66d522360bb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:44:09.940834 env[1222]: 2024-02-09 09:44:09.907 [INFO][4511] k8s.go 578: Cleaning up netns 
ContainerID="e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" Feb 9 09:44:09.940834 env[1222]: 2024-02-09 09:44:09.907 [INFO][4511] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" iface="eth0" netns="" Feb 9 09:44:09.940834 env[1222]: 2024-02-09 09:44:09.907 [INFO][4511] k8s.go 585: Releasing IP address(es) ContainerID="e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" Feb 9 09:44:09.940834 env[1222]: 2024-02-09 09:44:09.907 [INFO][4511] utils.go 188: Calico CNI releasing IP address ContainerID="e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" Feb 9 09:44:09.940834 env[1222]: 2024-02-09 09:44:09.925 [INFO][4518] ipam_plugin.go 415: Releasing address using handleID ContainerID="e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" HandleID="k8s-pod-network.e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" Workload="localhost-k8s-coredns--787d4945fb--4skrb-eth0" Feb 9 09:44:09.940834 env[1222]: 2024-02-09 09:44:09.926 [INFO][4518] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:44:09.940834 env[1222]: 2024-02-09 09:44:09.926 [INFO][4518] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:44:09.940834 env[1222]: 2024-02-09 09:44:09.935 [WARNING][4518] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" HandleID="k8s-pod-network.e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" Workload="localhost-k8s-coredns--787d4945fb--4skrb-eth0" Feb 9 09:44:09.940834 env[1222]: 2024-02-09 09:44:09.935 [INFO][4518] ipam_plugin.go 443: Releasing address using workloadID ContainerID="e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" HandleID="k8s-pod-network.e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" Workload="localhost-k8s-coredns--787d4945fb--4skrb-eth0" Feb 9 09:44:09.940834 env[1222]: 2024-02-09 09:44:09.937 [INFO][4518] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:44:09.940834 env[1222]: 2024-02-09 09:44:09.939 [INFO][4511] k8s.go 591: Teardown processing complete. ContainerID="e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4" Feb 9 09:44:09.941266 env[1222]: time="2024-02-09T09:44:09.940871456Z" level=info msg="TearDown network for sandbox \"e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4\" successfully" Feb 9 09:44:09.943413 env[1222]: time="2024-02-09T09:44:09.943365722Z" level=info msg="RemovePodSandbox \"e5811bce1afeaae82a3dd283de85c34e8b780dabec177447824b55d51bf561a4\" returns successfully" Feb 9 09:44:09.943993 env[1222]: time="2024-02-09T09:44:09.943967439Z" level=info msg="StopPodSandbox for \"e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607\"" Feb 9 09:44:10.030277 env[1222]: 2024-02-09 09:44:09.999 [WARNING][4541] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d664cd787--78hlq-eth0", GenerateName:"calico-kube-controllers-5d664cd787-", Namespace:"calico-system", SelfLink:"", UID:"74acb228-0bbd-4d33-80b3-a534d2c83208", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 43, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d664cd787", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"37f03f2ae789feaa8764797cacb4ca9cc377713a5c99f9dbe7a3b1eacbb8e145", Pod:"calico-kube-controllers-5d664cd787-78hlq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7f8e52e54c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:44:10.030277 env[1222]: 2024-02-09 09:44:09.999 [INFO][4541] k8s.go 578: Cleaning up netns ContainerID="e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" Feb 9 09:44:10.030277 env[1222]: 2024-02-09 09:44:09.999 [INFO][4541] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" iface="eth0" netns="" Feb 9 09:44:10.030277 env[1222]: 2024-02-09 09:44:09.999 [INFO][4541] k8s.go 585: Releasing IP address(es) ContainerID="e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" Feb 9 09:44:10.030277 env[1222]: 2024-02-09 09:44:09.999 [INFO][4541] utils.go 188: Calico CNI releasing IP address ContainerID="e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" Feb 9 09:44:10.030277 env[1222]: 2024-02-09 09:44:10.016 [INFO][4549] ipam_plugin.go 415: Releasing address using handleID ContainerID="e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" HandleID="k8s-pod-network.e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" Workload="localhost-k8s-calico--kube--controllers--5d664cd787--78hlq-eth0" Feb 9 09:44:10.030277 env[1222]: 2024-02-09 09:44:10.016 [INFO][4549] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:44:10.030277 env[1222]: 2024-02-09 09:44:10.016 [INFO][4549] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:44:10.030277 env[1222]: 2024-02-09 09:44:10.025 [WARNING][4549] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" HandleID="k8s-pod-network.e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" Workload="localhost-k8s-calico--kube--controllers--5d664cd787--78hlq-eth0" Feb 9 09:44:10.030277 env[1222]: 2024-02-09 09:44:10.025 [INFO][4549] ipam_plugin.go 443: Releasing address using workloadID ContainerID="e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" HandleID="k8s-pod-network.e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" Workload="localhost-k8s-calico--kube--controllers--5d664cd787--78hlq-eth0" Feb 9 09:44:10.030277 env[1222]: 2024-02-09 09:44:10.027 [INFO][4549] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 09:44:10.030277 env[1222]: 2024-02-09 09:44:10.029 [INFO][4541] k8s.go 591: Teardown processing complete. ContainerID="e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" Feb 9 09:44:10.030710 env[1222]: time="2024-02-09T09:44:10.030289850Z" level=info msg="TearDown network for sandbox \"e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607\" successfully" Feb 9 09:44:10.030710 env[1222]: time="2024-02-09T09:44:10.030331090Z" level=info msg="StopPodSandbox for \"e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607\" returns successfully" Feb 9 09:44:10.030786 env[1222]: time="2024-02-09T09:44:10.030755408Z" level=info msg="RemovePodSandbox for \"e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607\"" Feb 9 09:44:10.030825 env[1222]: time="2024-02-09T09:44:10.030792487Z" level=info msg="Forcibly stopping sandbox \"e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607\"" Feb 9 09:44:10.102714 env[1222]: 2024-02-09 09:44:10.070 [WARNING][4572] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d664cd787--78hlq-eth0", GenerateName:"calico-kube-controllers-5d664cd787-", Namespace:"calico-system", SelfLink:"", UID:"74acb228-0bbd-4d33-80b3-a534d2c83208", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 43, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d664cd787", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"37f03f2ae789feaa8764797cacb4ca9cc377713a5c99f9dbe7a3b1eacbb8e145", Pod:"calico-kube-controllers-5d664cd787-78hlq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7f8e52e54c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:44:10.102714 env[1222]: 2024-02-09 09:44:10.071 [INFO][4572] k8s.go 578: Cleaning up netns ContainerID="e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" Feb 9 09:44:10.102714 env[1222]: 2024-02-09 09:44:10.071 [INFO][4572] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" iface="eth0" netns="" Feb 9 09:44:10.102714 env[1222]: 2024-02-09 09:44:10.071 [INFO][4572] k8s.go 585: Releasing IP address(es) ContainerID="e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" Feb 9 09:44:10.102714 env[1222]: 2024-02-09 09:44:10.071 [INFO][4572] utils.go 188: Calico CNI releasing IP address ContainerID="e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" Feb 9 09:44:10.102714 env[1222]: 2024-02-09 09:44:10.088 [INFO][4580] ipam_plugin.go 415: Releasing address using handleID ContainerID="e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" HandleID="k8s-pod-network.e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" Workload="localhost-k8s-calico--kube--controllers--5d664cd787--78hlq-eth0" Feb 9 09:44:10.102714 env[1222]: 2024-02-09 09:44:10.088 [INFO][4580] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:44:10.102714 env[1222]: 2024-02-09 09:44:10.088 [INFO][4580] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:44:10.102714 env[1222]: 2024-02-09 09:44:10.098 [WARNING][4580] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" HandleID="k8s-pod-network.e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" Workload="localhost-k8s-calico--kube--controllers--5d664cd787--78hlq-eth0" Feb 9 09:44:10.102714 env[1222]: 2024-02-09 09:44:10.098 [INFO][4580] ipam_plugin.go 443: Releasing address using workloadID ContainerID="e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" HandleID="k8s-pod-network.e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" Workload="localhost-k8s-calico--kube--controllers--5d664cd787--78hlq-eth0" Feb 9 09:44:10.102714 env[1222]: 2024-02-09 09:44:10.100 [INFO][4580] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 09:44:10.102714 env[1222]: 2024-02-09 09:44:10.101 [INFO][4572] k8s.go 591: Teardown processing complete. ContainerID="e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607" Feb 9 09:44:10.102714 env[1222]: time="2024-02-09T09:44:10.102662459Z" level=info msg="TearDown network for sandbox \"e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607\" successfully" Feb 9 09:44:10.107469 env[1222]: time="2024-02-09T09:44:10.107424194Z" level=info msg="RemovePodSandbox \"e175f4dbbca11103091431b8d93a49677ad17df44f922cae9fadb45232b03607\" returns successfully" Feb 9 09:44:10.107930 env[1222]: time="2024-02-09T09:44:10.107889271Z" level=info msg="StopPodSandbox for \"ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479\"" Feb 9 09:44:10.172135 env[1222]: 2024-02-09 09:44:10.141 [WARNING][4602] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dvjfd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"440d5d1f-6bf0-4b9d-b4ee-c3a791745bc9", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 43, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3bc82d1af0917210c0c5a00e1eb3bd21a24d09f96bd0fb748cd461200f57e30e", Pod:"csi-node-driver-dvjfd", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali485ad967624", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:44:10.172135 env[1222]: 2024-02-09 09:44:10.142 [INFO][4602] k8s.go 578: Cleaning up netns ContainerID="ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" Feb 9 09:44:10.172135 env[1222]: 2024-02-09 09:44:10.142 [INFO][4602] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" iface="eth0" netns="" Feb 9 09:44:10.172135 env[1222]: 2024-02-09 09:44:10.142 [INFO][4602] k8s.go 585: Releasing IP address(es) ContainerID="ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" Feb 9 09:44:10.172135 env[1222]: 2024-02-09 09:44:10.142 [INFO][4602] utils.go 188: Calico CNI releasing IP address ContainerID="ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" Feb 9 09:44:10.172135 env[1222]: 2024-02-09 09:44:10.158 [INFO][4609] ipam_plugin.go 415: Releasing address using handleID ContainerID="ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" HandleID="k8s-pod-network.ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" Workload="localhost-k8s-csi--node--driver--dvjfd-eth0" Feb 9 09:44:10.172135 env[1222]: 2024-02-09 09:44:10.158 [INFO][4609] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:44:10.172135 env[1222]: 2024-02-09 09:44:10.158 [INFO][4609] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 09:44:10.172135 env[1222]: 2024-02-09 09:44:10.168 [WARNING][4609] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" HandleID="k8s-pod-network.ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" Workload="localhost-k8s-csi--node--driver--dvjfd-eth0" Feb 9 09:44:10.172135 env[1222]: 2024-02-09 09:44:10.168 [INFO][4609] ipam_plugin.go 443: Releasing address using workloadID ContainerID="ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" HandleID="k8s-pod-network.ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" Workload="localhost-k8s-csi--node--driver--dvjfd-eth0" Feb 9 09:44:10.172135 env[1222]: 2024-02-09 09:44:10.169 [INFO][4609] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:44:10.172135 env[1222]: 2024-02-09 09:44:10.170 [INFO][4602] k8s.go 591: Teardown processing complete. ContainerID="ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" Feb 9 09:44:10.172590 env[1222]: time="2024-02-09T09:44:10.172161884Z" level=info msg="TearDown network for sandbox \"ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479\" successfully" Feb 9 09:44:10.172590 env[1222]: time="2024-02-09T09:44:10.172209124Z" level=info msg="StopPodSandbox for \"ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479\" returns successfully" Feb 9 09:44:10.172665 env[1222]: time="2024-02-09T09:44:10.172633841Z" level=info msg="RemovePodSandbox for \"ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479\"" Feb 9 09:44:10.172713 env[1222]: time="2024-02-09T09:44:10.172675441Z" level=info msg="Forcibly stopping sandbox \"ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479\"" Feb 9 09:44:10.235004 env[1222]: 2024-02-09 09:44:10.204 [WARNING][4631] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dvjfd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"440d5d1f-6bf0-4b9d-b4ee-c3a791745bc9", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 43, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3bc82d1af0917210c0c5a00e1eb3bd21a24d09f96bd0fb748cd461200f57e30e", Pod:"csi-node-driver-dvjfd", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali485ad967624", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:44:10.235004 env[1222]: 2024-02-09 09:44:10.204 [INFO][4631] k8s.go 578: Cleaning up netns ContainerID="ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" Feb 9 09:44:10.235004 env[1222]: 2024-02-09 09:44:10.204 [INFO][4631] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" iface="eth0" netns="" Feb 9 09:44:10.235004 env[1222]: 2024-02-09 09:44:10.204 [INFO][4631] k8s.go 585: Releasing IP address(es) ContainerID="ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" Feb 9 09:44:10.235004 env[1222]: 2024-02-09 09:44:10.204 [INFO][4631] utils.go 188: Calico CNI releasing IP address ContainerID="ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" Feb 9 09:44:10.235004 env[1222]: 2024-02-09 09:44:10.221 [INFO][4639] ipam_plugin.go 415: Releasing address using handleID ContainerID="ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" HandleID="k8s-pod-network.ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" Workload="localhost-k8s-csi--node--driver--dvjfd-eth0" Feb 9 09:44:10.235004 env[1222]: 2024-02-09 09:44:10.221 [INFO][4639] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:44:10.235004 env[1222]: 2024-02-09 09:44:10.221 [INFO][4639] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:44:10.235004 env[1222]: 2024-02-09 09:44:10.230 [WARNING][4639] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" HandleID="k8s-pod-network.ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" Workload="localhost-k8s-csi--node--driver--dvjfd-eth0" Feb 9 09:44:10.235004 env[1222]: 2024-02-09 09:44:10.230 [INFO][4639] ipam_plugin.go 443: Releasing address using workloadID ContainerID="ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" HandleID="k8s-pod-network.ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" Workload="localhost-k8s-csi--node--driver--dvjfd-eth0" Feb 9 09:44:10.235004 env[1222]: 2024-02-09 09:44:10.232 [INFO][4639] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 09:44:10.235004 env[1222]: 2024-02-09 09:44:10.233 [INFO][4631] k8s.go 591: Teardown processing complete. ContainerID="ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479" Feb 9 09:44:10.235440 env[1222]: time="2024-02-09T09:44:10.235033304Z" level=info msg="TearDown network for sandbox \"ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479\" successfully" Feb 9 09:44:10.237502 env[1222]: time="2024-02-09T09:44:10.237462371Z" level=info msg="RemovePodSandbox \"ddc9f081f916ed4bd4aebadc285784ced86345f87f7bad053dab040e74dfe479\" returns successfully" Feb 9 09:44:10.237917 env[1222]: time="2024-02-09T09:44:10.237885969Z" level=info msg="StopPodSandbox for \"820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c\"" Feb 9 09:44:10.314389 env[1222]: 2024-02-09 09:44:10.280 [WARNING][4661] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--8jgqw-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"2e9d6543-4145-4cad-b789-3d4de6afdcd4", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 43, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"c3d1c92e6731ad84479c4900a9bf7a32ca3612e5fbb76857a0978d1b31ee609e", Pod:"coredns-787d4945fb-8jgqw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95e16740f6a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:44:10.314389 env[1222]: 2024-02-09 09:44:10.281 [INFO][4661] k8s.go 578: Cleaning up netns ContainerID="820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" Feb 9 09:44:10.314389 env[1222]: 2024-02-09 09:44:10.281 [INFO][4661] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" iface="eth0" netns="" Feb 9 09:44:10.314389 env[1222]: 2024-02-09 09:44:10.281 [INFO][4661] k8s.go 585: Releasing IP address(es) ContainerID="820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" Feb 9 09:44:10.314389 env[1222]: 2024-02-09 09:44:10.281 [INFO][4661] utils.go 188: Calico CNI releasing IP address ContainerID="820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" Feb 9 09:44:10.314389 env[1222]: 2024-02-09 09:44:10.296 [INFO][4669] ipam_plugin.go 415: Releasing address using handleID ContainerID="820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" HandleID="k8s-pod-network.820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" Workload="localhost-k8s-coredns--787d4945fb--8jgqw-eth0" Feb 9 09:44:10.314389 env[1222]: 2024-02-09 09:44:10.296 [INFO][4669] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:44:10.314389 env[1222]: 2024-02-09 09:44:10.296 [INFO][4669] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:44:10.314389 env[1222]: 2024-02-09 09:44:10.305 [WARNING][4669] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" HandleID="k8s-pod-network.820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" Workload="localhost-k8s-coredns--787d4945fb--8jgqw-eth0" Feb 9 09:44:10.314389 env[1222]: 2024-02-09 09:44:10.305 [INFO][4669] ipam_plugin.go 443: Releasing address using workloadID ContainerID="820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" HandleID="k8s-pod-network.820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" Workload="localhost-k8s-coredns--787d4945fb--8jgqw-eth0" Feb 9 09:44:10.314389 env[1222]: 2024-02-09 09:44:10.307 [INFO][4669] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 09:44:10.314389 env[1222]: 2024-02-09 09:44:10.308 [INFO][4661] k8s.go 591: Teardown processing complete. ContainerID="820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" Feb 9 09:44:10.314885 env[1222]: time="2024-02-09T09:44:10.314419476Z" level=info msg="TearDown network for sandbox \"820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c\" successfully" Feb 9 09:44:10.314885 env[1222]: time="2024-02-09T09:44:10.314454356Z" level=info msg="StopPodSandbox for \"820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c\" returns successfully" Feb 9 09:44:10.315534 env[1222]: time="2024-02-09T09:44:10.315507310Z" level=info msg="RemovePodSandbox for \"820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c\"" Feb 9 09:44:10.315611 env[1222]: time="2024-02-09T09:44:10.315543870Z" level=info msg="Forcibly stopping sandbox \"820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c\"" Feb 9 09:44:10.379676 env[1222]: 2024-02-09 09:44:10.348 [WARNING][4692] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--8jgqw-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"2e9d6543-4145-4cad-b789-3d4de6afdcd4", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 43, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c3d1c92e6731ad84479c4900a9bf7a32ca3612e5fbb76857a0978d1b31ee609e", Pod:"coredns-787d4945fb-8jgqw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95e16740f6a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:44:10.379676 env[1222]: 2024-02-09 09:44:10.348 [INFO][4692] k8s.go 578: Cleaning up netns 
ContainerID="820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" Feb 9 09:44:10.379676 env[1222]: 2024-02-09 09:44:10.348 [INFO][4692] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" iface="eth0" netns="" Feb 9 09:44:10.379676 env[1222]: 2024-02-09 09:44:10.348 [INFO][4692] k8s.go 585: Releasing IP address(es) ContainerID="820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" Feb 9 09:44:10.379676 env[1222]: 2024-02-09 09:44:10.348 [INFO][4692] utils.go 188: Calico CNI releasing IP address ContainerID="820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" Feb 9 09:44:10.379676 env[1222]: 2024-02-09 09:44:10.365 [INFO][4700] ipam_plugin.go 415: Releasing address using handleID ContainerID="820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" HandleID="k8s-pod-network.820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" Workload="localhost-k8s-coredns--787d4945fb--8jgqw-eth0" Feb 9 09:44:10.379676 env[1222]: 2024-02-09 09:44:10.365 [INFO][4700] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:44:10.379676 env[1222]: 2024-02-09 09:44:10.365 [INFO][4700] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:44:10.379676 env[1222]: 2024-02-09 09:44:10.374 [WARNING][4700] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" HandleID="k8s-pod-network.820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" Workload="localhost-k8s-coredns--787d4945fb--8jgqw-eth0" Feb 9 09:44:10.379676 env[1222]: 2024-02-09 09:44:10.374 [INFO][4700] ipam_plugin.go 443: Releasing address using workloadID ContainerID="820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" HandleID="k8s-pod-network.820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" Workload="localhost-k8s-coredns--787d4945fb--8jgqw-eth0" Feb 9 09:44:10.379676 env[1222]: 2024-02-09 09:44:10.376 [INFO][4700] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:44:10.379676 env[1222]: 2024-02-09 09:44:10.378 [INFO][4692] k8s.go 591: Teardown processing complete. ContainerID="820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c" Feb 9 09:44:10.380858 env[1222]: time="2024-02-09T09:44:10.379641563Z" level=info msg="TearDown network for sandbox \"820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c\" successfully" Feb 9 09:44:10.383941 env[1222]: time="2024-02-09T09:44:10.383909980Z" level=info msg="RemovePodSandbox \"820a3826838fd8a716cf06ec044b26edc95b4dcf9350de3f0ac51cb25584c15c\" returns successfully" Feb 9 09:44:10.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.11:22-10.0.0.1:43784 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:44:10.608766 systemd[1]: Started sshd@10-10.0.0.11:22-10.0.0.1:43784.service. Feb 9 09:44:10.609482 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 09:44:10.609509 kernel: audit: type=1130 audit(1707471850.607:339): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.11:22-10.0.0.1:43784 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:44:10.653000 audit[4709]: USER_ACCT pid=4709 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:10.654871 sshd[4709]: Accepted publickey for core from 10.0.0.1 port 43784 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:44:10.656079 sshd[4709]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:44:10.654000 audit[4709]: CRED_ACQ pid=4709 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:10.659594 kernel: audit: type=1101 audit(1707471850.653:340): pid=4709 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:10.659695 kernel: audit: type=1103 audit(1707471850.654:341): pid=4709 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:10.659718 kernel: audit: type=1006 audit(1707471850.654:342): pid=4709 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Feb 9 09:44:10.660839 kernel: audit: type=1300 audit(1707471850.654:342): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcbe13fe0 a2=3 a3=1 items=0 ppid=1 pid=4709 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 
09:44:10.654000 audit[4709]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcbe13fe0 a2=3 a3=1 items=0 ppid=1 pid=4709 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:44:10.663235 systemd[1]: Started session-11.scope. Feb 9 09:44:10.654000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 09:44:10.664200 systemd-logind[1203]: New session 11 of user core. Feb 9 09:44:10.664465 kernel: audit: type=1327 audit(1707471850.654:342): proctitle=737368643A20636F7265205B707269765D Feb 9 09:44:10.667000 audit[4709]: USER_START pid=4709 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:10.668000 audit[4712]: CRED_ACQ pid=4712 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:10.673130 kernel: audit: type=1105 audit(1707471850.667:343): pid=4709 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:10.673181 kernel: audit: type=1103 audit(1707471850.668:344): pid=4712 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:10.802492 sshd[4709]: pam_unix(sshd:session): session closed for user core Feb 9 09:44:10.802000 audit[4709]: 
USER_END pid=4709 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:10.805001 systemd[1]: Started sshd@11-10.0.0.11:22-10.0.0.1:43786.service. Feb 9 09:44:10.806087 systemd-logind[1203]: Session 11 logged out. Waiting for processes to exit. Feb 9 09:44:10.806183 systemd[1]: sshd@10-10.0.0.11:22-10.0.0.1:43784.service: Deactivated successfully. Feb 9 09:44:10.802000 audit[4709]: CRED_DISP pid=4709 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:10.807023 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 09:44:10.807534 systemd-logind[1203]: Removed session 11. Feb 9 09:44:10.809232 kernel: audit: type=1106 audit(1707471850.802:345): pid=4709 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:10.809321 kernel: audit: type=1104 audit(1707471850.802:346): pid=4709 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:10.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.11:22-10.0.0.1:43786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:44:10.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.11:22-10.0.0.1:43784 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:44:10.842000 audit[4722]: USER_ACCT pid=4722 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:10.844222 sshd[4722]: Accepted publickey for core from 10.0.0.1 port 43786 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:44:10.843000 audit[4722]: CRED_ACQ pid=4722 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:10.843000 audit[4722]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff12483d0 a2=3 a3=1 items=0 ppid=1 pid=4722 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:44:10.843000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 09:44:10.845556 sshd[4722]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:44:10.849363 systemd-logind[1203]: New session 12 of user core. Feb 9 09:44:10.849798 systemd[1]: Started session-12.scope. 
Feb 9 09:44:10.851000 audit[4722]: USER_START pid=4722 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:10.853000 audit[4727]: CRED_ACQ pid=4727 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:11.850277 sshd[4722]: pam_unix(sshd:session): session closed for user core Feb 9 09:44:11.854636 systemd[1]: Started sshd@12-10.0.0.11:22-10.0.0.1:43792.service. Feb 9 09:44:11.852000 audit[4722]: USER_END pid=4722 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:11.853000 audit[4722]: CRED_DISP pid=4722 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:11.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.11:22-10.0.0.1:43792 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:44:11.862919 systemd[1]: sshd@11-10.0.0.11:22-10.0.0.1:43786.service: Deactivated successfully. Feb 9 09:44:11.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.11:22-10.0.0.1:43786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:44:11.867373 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 09:44:11.867386 systemd-logind[1203]: Session 12 logged out. Waiting for processes to exit. Feb 9 09:44:11.871560 systemd-logind[1203]: Removed session 12. Feb 9 09:44:11.909000 audit[4735]: USER_ACCT pid=4735 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:11.910995 sshd[4735]: Accepted publickey for core from 10.0.0.1 port 43792 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:44:11.910000 audit[4735]: CRED_ACQ pid=4735 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:11.910000 audit[4735]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffce2f4470 a2=3 a3=1 items=0 ppid=1 pid=4735 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:44:11.910000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 09:44:11.912614 sshd[4735]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:44:11.916811 systemd[1]: Started session-13.scope. Feb 9 09:44:11.916989 systemd-logind[1203]: New session 13 of user core. 
Feb 9 09:44:11.919000 audit[4735]: USER_START pid=4735 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:11.920000 audit[4740]: CRED_ACQ pid=4740 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:12.027888 sshd[4735]: pam_unix(sshd:session): session closed for user core Feb 9 09:44:12.027000 audit[4735]: USER_END pid=4735 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:12.027000 audit[4735]: CRED_DISP pid=4735 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:12.030254 systemd[1]: sshd@12-10.0.0.11:22-10.0.0.1:43792.service: Deactivated successfully. Feb 9 09:44:12.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.11:22-10.0.0.1:43792 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:44:12.031292 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 09:44:12.031642 systemd-logind[1203]: Session 13 logged out. Waiting for processes to exit. Feb 9 09:44:12.032380 systemd-logind[1203]: Removed session 13. Feb 9 09:44:17.031034 systemd[1]: Started sshd@13-10.0.0.11:22-10.0.0.1:44372.service. 
Feb 9 09:44:17.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.11:22-10.0.0.1:44372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:44:17.033779 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 9 09:44:17.033866 kernel: audit: type=1130 audit(1707471857.029:366): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.11:22-10.0.0.1:44372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:44:17.064000 audit[4757]: USER_ACCT pid=4757 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:17.066446 sshd[4757]: Accepted publickey for core from 10.0.0.1 port 44372 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:44:17.067624 sshd[4757]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:44:17.065000 audit[4757]: CRED_ACQ pid=4757 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:17.070712 kernel: audit: type=1101 audit(1707471857.064:367): pid=4757 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:17.070782 kernel: audit: type=1103 audit(1707471857.065:368): pid=4757 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:17.070803 kernel: audit: type=1006 audit(1707471857.066:369): pid=4757 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Feb 9 09:44:17.072064 kernel: audit: type=1300 audit(1707471857.066:369): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc567ab10 a2=3 a3=1 items=0 ppid=1 pid=4757 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:44:17.066000 audit[4757]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc567ab10 a2=3 a3=1 items=0 ppid=1 pid=4757 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:44:17.073356 systemd-logind[1203]: New session 14 of user core. Feb 9 09:44:17.073660 systemd[1]: Started session-14.scope. 
Feb 9 09:44:17.074373 kernel: audit: type=1327 audit(1707471857.066:369): proctitle=737368643A20636F7265205B707269765D Feb 9 09:44:17.066000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 09:44:17.076000 audit[4757]: USER_START pid=4757 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:17.077000 audit[4760]: CRED_ACQ pid=4760 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:17.082130 kernel: audit: type=1105 audit(1707471857.076:370): pid=4757 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:17.082186 kernel: audit: type=1103 audit(1707471857.077:371): pid=4760 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:17.178968 sshd[4757]: pam_unix(sshd:session): session closed for user core Feb 9 09:44:17.179000 audit[4757]: USER_END pid=4757 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:17.181356 systemd[1]: Started sshd@14-10.0.0.11:22-10.0.0.1:44374.service. 
Feb 9 09:44:17.182899 systemd-logind[1203]: Session 14 logged out. Waiting for processes to exit. Feb 9 09:44:17.183071 systemd[1]: sshd@13-10.0.0.11:22-10.0.0.1:44372.service: Deactivated successfully. Feb 9 09:44:17.179000 audit[4757]: CRED_DISP pid=4757 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:17.183885 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 09:44:17.184325 systemd-logind[1203]: Removed session 14. Feb 9 09:44:17.185234 kernel: audit: type=1106 audit(1707471857.179:372): pid=4757 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:17.185295 kernel: audit: type=1104 audit(1707471857.179:373): pid=4757 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:17.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.11:22-10.0.0.1:44374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:44:17.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.11:22-10.0.0.1:44372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:44:17.217000 audit[4769]: USER_ACCT pid=4769 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:17.218678 sshd[4769]: Accepted publickey for core from 10.0.0.1 port 44374 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:44:17.218000 audit[4769]: CRED_ACQ pid=4769 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:17.218000 audit[4769]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffebc33000 a2=3 a3=1 items=0 ppid=1 pid=4769 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:44:17.218000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 09:44:17.220042 sshd[4769]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:44:17.223024 systemd-logind[1203]: New session 15 of user core. Feb 9 09:44:17.223793 systemd[1]: Started session-15.scope. 
Feb 9 09:44:17.226000 audit[4769]: USER_START pid=4769 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:17.227000 audit[4774]: CRED_ACQ pid=4774 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:17.432888 sshd[4769]: pam_unix(sshd:session): session closed for user core Feb 9 09:44:17.432000 audit[4769]: USER_END pid=4769 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:17.433000 audit[4769]: CRED_DISP pid=4769 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:17.435587 systemd[1]: Started sshd@15-10.0.0.11:22-10.0.0.1:44376.service. Feb 9 09:44:17.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.11:22-10.0.0.1:44376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:44:17.436526 systemd-logind[1203]: Session 15 logged out. Waiting for processes to exit. Feb 9 09:44:17.436680 systemd[1]: sshd@14-10.0.0.11:22-10.0.0.1:44374.service: Deactivated successfully. 
Feb 9 09:44:17.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.11:22-10.0.0.1:44374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:44:17.437713 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 09:44:17.438167 systemd-logind[1203]: Removed session 15. Feb 9 09:44:17.471000 audit[4782]: USER_ACCT pid=4782 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:17.472933 sshd[4782]: Accepted publickey for core from 10.0.0.1 port 44376 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:44:17.472000 audit[4782]: CRED_ACQ pid=4782 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:17.472000 audit[4782]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffc1d3230 a2=3 a3=1 items=0 ppid=1 pid=4782 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:44:17.472000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 09:44:17.474392 sshd[4782]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:44:17.477816 systemd-logind[1203]: New session 16 of user core. Feb 9 09:44:17.478656 systemd[1]: Started session-16.scope. 
Feb 9 09:44:17.481000 audit[4782]: USER_START pid=4782 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:17.483000 audit[4787]: CRED_ACQ pid=4787 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:18.271598 sshd[4782]: pam_unix(sshd:session): session closed for user core Feb 9 09:44:18.272000 audit[4782]: USER_END pid=4782 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:18.272000 audit[4782]: CRED_DISP pid=4782 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:18.274266 systemd[1]: Started sshd@16-10.0.0.11:22-10.0.0.1:44390.service. Feb 9 09:44:18.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.11:22-10.0.0.1:44390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:44:18.276227 systemd[1]: sshd@15-10.0.0.11:22-10.0.0.1:44376.service: Deactivated successfully. Feb 9 09:44:18.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.11:22-10.0.0.1:44376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:44:18.277403 systemd-logind[1203]: Session 16 logged out. Waiting for processes to exit. Feb 9 09:44:18.277438 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 09:44:18.278210 systemd-logind[1203]: Removed session 16. Feb 9 09:44:18.318000 audit[4827]: NETFILTER_CFG table=filter:127 family=2 entries=6 op=nft_register_rule pid=4827 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:44:18.318000 audit[4827]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffe6b17190 a2=0 a3=ffff8e9f56c0 items=0 ppid=2347 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:44:18.318000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:44:18.318000 audit[4804]: USER_ACCT pid=4804 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:18.320445 sshd[4804]: Accepted publickey for core from 10.0.0.1 port 44390 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:44:18.320000 audit[4804]: CRED_ACQ pid=4804 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:18.320000 audit[4804]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffce217470 a2=3 a3=1 items=0 ppid=1 pid=4804 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:44:18.320000 audit: PROCTITLE 
proctitle=737368643A20636F7265205B707269765D Feb 9 09:44:18.322009 sshd[4804]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:44:18.319000 audit[4827]: NETFILTER_CFG table=nat:128 family=2 entries=78 op=nft_register_rule pid=4827 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:44:18.319000 audit[4827]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffe6b17190 a2=0 a3=ffff8e9f56c0 items=0 ppid=2347 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:44:18.319000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:44:18.326332 systemd-logind[1203]: New session 17 of user core. Feb 9 09:44:18.326740 systemd[1]: Started session-17.scope. Feb 9 09:44:18.330000 audit[4804]: USER_START pid=4804 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:18.331000 audit[4833]: CRED_ACQ pid=4833 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:18.359000 audit[4855]: NETFILTER_CFG table=filter:129 family=2 entries=18 op=nft_register_rule pid=4855 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:44:18.359000 audit[4855]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10364 a0=3 a1=ffffda4237e0 a2=0 a3=ffff8cbfc6c0 items=0 ppid=2347 pid=4855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:44:18.359000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:44:18.361000 audit[4855]: NETFILTER_CFG table=nat:130 family=2 entries=78 op=nft_register_rule pid=4855 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:44:18.361000 audit[4855]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffda4237e0 a2=0 a3=ffff8cbfc6c0 items=0 ppid=2347 pid=4855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:44:18.361000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:44:18.543345 sshd[4804]: pam_unix(sshd:session): session closed for user core Feb 9 09:44:18.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.11:22-10.0.0.1:44404 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:44:18.545000 audit[4804]: USER_END pid=4804 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:18.545000 audit[4804]: CRED_DISP pid=4804 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:18.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.11:22-10.0.0.1:44390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:44:18.544688 systemd[1]: Started sshd@17-10.0.0.11:22-10.0.0.1:44404.service. Feb 9 09:44:18.548646 systemd[1]: sshd@16-10.0.0.11:22-10.0.0.1:44390.service: Deactivated successfully. Feb 9 09:44:18.550780 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 09:44:18.551478 systemd-logind[1203]: Session 17 logged out. Waiting for processes to exit. Feb 9 09:44:18.553771 systemd-logind[1203]: Removed session 17. 
Feb 9 09:44:18.581000 audit[4862]: USER_ACCT pid=4862 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:18.582592 sshd[4862]: Accepted publickey for core from 10.0.0.1 port 44404 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:44:18.582000 audit[4862]: CRED_ACQ pid=4862 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:18.582000 audit[4862]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffce828c20 a2=3 a3=1 items=0 ppid=1 pid=4862 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:44:18.582000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 09:44:18.583673 sshd[4862]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:44:18.586788 systemd-logind[1203]: New session 18 of user core. Feb 9 09:44:18.587659 systemd[1]: Started session-18.scope. 
Feb 9 09:44:18.590000 audit[4862]: USER_START pid=4862 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:18.591000 audit[4867]: CRED_ACQ pid=4867 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:18.704866 sshd[4862]: pam_unix(sshd:session): session closed for user core Feb 9 09:44:18.704000 audit[4862]: USER_END pid=4862 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:18.704000 audit[4862]: CRED_DISP pid=4862 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:18.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.11:22-10.0.0.1:44404 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:44:18.707143 systemd[1]: sshd@17-10.0.0.11:22-10.0.0.1:44404.service: Deactivated successfully. Feb 9 09:44:18.708328 systemd-logind[1203]: Session 18 logged out. Waiting for processes to exit. Feb 9 09:44:18.708426 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 09:44:18.709371 systemd-logind[1203]: Removed session 18. 
Feb 9 09:44:19.849530 kubelet[2163]: E0209 09:44:19.849498 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:23.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.11:22-10.0.0.1:49926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:44:23.708170 systemd[1]: Started sshd@18-10.0.0.11:22-10.0.0.1:49926.service. Feb 9 09:44:23.710960 kernel: kauditd_printk_skb: 57 callbacks suppressed Feb 9 09:44:23.711045 kernel: audit: type=1130 audit(1707471863.707:415): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.11:22-10.0.0.1:49926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:44:23.742000 audit[4901]: USER_ACCT pid=4901 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:23.744000 sshd[4901]: Accepted publickey for core from 10.0.0.1 port 49926 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:44:23.745486 sshd[4901]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:44:23.743000 audit[4901]: CRED_ACQ pid=4901 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:23.748324 kernel: audit: type=1101 audit(1707471863.742:416): pid=4901 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:23.748386 kernel: audit: type=1103 audit(1707471863.743:417): pid=4901 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:23.748412 kernel: audit: type=1006 audit(1707471863.743:418): pid=4901 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Feb 9 09:44:23.749311 systemd-logind[1203]: New session 19 of user core. Feb 9 09:44:23.750032 kernel: audit: type=1300 audit(1707471863.743:418): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffedf520a0 a2=3 a3=1 items=0 ppid=1 pid=4901 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:44:23.743000 audit[4901]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffedf520a0 a2=3 a3=1 items=0 ppid=1 pid=4901 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:44:23.749703 systemd[1]: Started session-19.scope. 
Feb 9 09:44:23.743000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 09:44:23.752941 kernel: audit: type=1327 audit(1707471863.743:418): proctitle=737368643A20636F7265205B707269765D Feb 9 09:44:23.752000 audit[4901]: USER_START pid=4901 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:23.753000 audit[4904]: CRED_ACQ pid=4904 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:23.760220 kernel: audit: type=1105 audit(1707471863.752:419): pid=4901 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:23.760278 kernel: audit: type=1103 audit(1707471863.753:420): pid=4904 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:23.863717 sshd[4901]: pam_unix(sshd:session): session closed for user core Feb 9 09:44:23.863000 audit[4901]: USER_END pid=4901 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:23.863000 audit[4901]: CRED_DISP pid=4901 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:23.868592 systemd[1]: sshd@18-10.0.0.11:22-10.0.0.1:49926.service: Deactivated successfully. Feb 9 09:44:23.870012 kernel: audit: type=1106 audit(1707471863.863:421): pid=4901 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:23.870074 kernel: audit: type=1104 audit(1707471863.863:422): pid=4901 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:23.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.11:22-10.0.0.1:49926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:44:23.869607 systemd-logind[1203]: Session 19 logged out. Waiting for processes to exit. Feb 9 09:44:23.869691 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 09:44:23.870592 systemd-logind[1203]: Removed session 19. 
Feb 9 09:44:24.018000 audit[4940]: NETFILTER_CFG table=filter:131 family=2 entries=18 op=nft_register_rule pid=4940 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:44:24.018000 audit[4940]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffffdcd0f0 a2=0 a3=ffffaccb06c0 items=0 ppid=2347 pid=4940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:44:24.018000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:44:24.021000 audit[4940]: NETFILTER_CFG table=nat:132 family=2 entries=162 op=nft_register_chain pid=4940 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:44:24.021000 audit[4940]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=66940 a0=3 a1=ffffffdcd0f0 a2=0 a3=ffffaccb06c0 items=0 ppid=2347 pid=4940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:44:24.021000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:44:27.848439 kubelet[2163]: E0209 09:44:27.848403 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:28.866520 systemd[1]: Started sshd@19-10.0.0.11:22-10.0.0.1:49930.service. Feb 9 09:44:28.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.11:22-10.0.0.1:49930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:44:28.869214 kernel: kauditd_printk_skb: 7 callbacks suppressed Feb 9 09:44:28.869272 kernel: audit: type=1130 audit(1707471868.865:426): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.11:22-10.0.0.1:49930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:44:28.900000 audit[4973]: USER_ACCT pid=4973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:28.901858 sshd[4973]: Accepted publickey for core from 10.0.0.1 port 49930 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:44:28.903011 sshd[4973]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:44:28.901000 audit[4973]: CRED_ACQ pid=4973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:28.906710 kernel: audit: type=1101 audit(1707471868.900:427): pid=4973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:28.906772 kernel: audit: type=1103 audit(1707471868.901:428): pid=4973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:28.906805 kernel: audit: type=1006 audit(1707471868.901:429): pid=4973 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) 
old-ses=4294967295 ses=20 res=1 Feb 9 09:44:28.906491 systemd-logind[1203]: New session 20 of user core. Feb 9 09:44:28.906769 systemd[1]: Started session-20.scope. Feb 9 09:44:28.901000 audit[4973]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc39104b0 a2=3 a3=1 items=0 ppid=1 pid=4973 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:44:28.910068 kernel: audit: type=1300 audit(1707471868.901:429): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc39104b0 a2=3 a3=1 items=0 ppid=1 pid=4973 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:44:28.901000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 09:44:28.911095 kernel: audit: type=1327 audit(1707471868.901:429): proctitle=737368643A20636F7265205B707269765D Feb 9 09:44:28.911199 kernel: audit: type=1105 audit(1707471868.909:430): pid=4973 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:28.909000 audit[4973]: USER_START pid=4973 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:28.910000 audit[4976]: CRED_ACQ pid=4976 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:28.915936 kernel: audit: 
type=1103 audit(1707471868.910:431): pid=4976 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:29.013596 sshd[4973]: pam_unix(sshd:session): session closed for user core Feb 9 09:44:29.013000 audit[4973]: USER_END pid=4973 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:29.016041 systemd-logind[1203]: Session 20 logged out. Waiting for processes to exit. Feb 9 09:44:29.016235 systemd[1]: sshd@19-10.0.0.11:22-10.0.0.1:49930.service: Deactivated successfully. Feb 9 09:44:29.017069 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 09:44:29.013000 audit[4973]: CRED_DISP pid=4973 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:29.017518 systemd-logind[1203]: Removed session 20. 
Feb 9 09:44:29.019263 kernel: audit: type=1106 audit(1707471869.013:432): pid=4973 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:29.019342 kernel: audit: type=1104 audit(1707471869.013:433): pid=4973 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:29.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.11:22-10.0.0.1:49930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:44:31.848876 kubelet[2163]: E0209 09:44:31.848845 2163 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 09:44:33.917248 kubelet[2163]: I0209 09:44:33.917195 2163 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:44:33.967000 audit[5013]: NETFILTER_CFG table=filter:133 family=2 entries=7 op=nft_register_rule pid=5013 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:44:33.968812 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 09:44:33.968886 kernel: audit: type=1325 audit(1707471873.967:435): table=filter:133 family=2 entries=7 op=nft_register_rule pid=5013 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:44:33.967000 audit[5013]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=fffff92d9850 a2=0 a3=ffffb1eef6c0 items=0 ppid=2347 pid=5013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:44:33.973354 kernel: audit: type=1300 audit(1707471873.967:435): arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=fffff92d9850 a2=0 a3=ffffb1eef6c0 items=0 ppid=2347 pid=5013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:44:33.973418 kernel: audit: type=1327 audit(1707471873.967:435): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:44:33.967000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:44:33.970000 audit[5013]: NETFILTER_CFG table=nat:134 family=2 entries=198 op=nft_register_rule pid=5013 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:44:33.970000 audit[5013]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=66940 a0=3 a1=fffff92d9850 a2=0 a3=ffffb1eef6c0 items=0 ppid=2347 pid=5013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:44:33.981223 kernel: audit: type=1325 audit(1707471873.970:436): table=nat:134 family=2 entries=198 op=nft_register_rule pid=5013 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:44:33.981280 kernel: audit: type=1300 audit(1707471873.970:436): arch=c00000b7 syscall=211 success=yes exit=66940 a0=3 a1=fffff92d9850 a2=0 a3=ffffb1eef6c0 items=0 ppid=2347 pid=5013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:44:33.981355 kernel: audit: type=1327 
audit(1707471873.970:436): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:44:33.970000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:44:34.015000 audit[5039]: NETFILTER_CFG table=filter:135 family=2 entries=8 op=nft_register_rule pid=5039 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:44:34.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.11:22-10.0.0.1:53682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:44:34.016551 systemd[1]: Started sshd@20-10.0.0.11:22-10.0.0.1:53682.service. Feb 9 09:44:34.019262 kernel: audit: type=1325 audit(1707471874.015:437): table=filter:135 family=2 entries=8 op=nft_register_rule pid=5039 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:44:34.019323 kernel: audit: type=1130 audit(1707471874.016:438): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.11:22-10.0.0.1:53682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:44:34.015000 audit[5039]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffedd8b4c0 a2=0 a3=ffff9bb316c0 items=0 ppid=2347 pid=5039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:44:34.023030 kernel: audit: type=1300 audit(1707471874.015:437): arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffedd8b4c0 a2=0 a3=ffff9bb316c0 items=0 ppid=2347 pid=5039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:44:34.023092 kernel: audit: type=1327 audit(1707471874.015:437): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:44:34.015000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:44:34.025000 audit[5039]: NETFILTER_CFG table=nat:136 family=2 entries=198 op=nft_register_rule pid=5039 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:44:34.025000 audit[5039]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=66940 a0=3 a1=ffffedd8b4c0 a2=0 a3=ffff9bb316c0 items=0 ppid=2347 pid=5039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:44:34.025000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:44:34.059000 audit[5040]: USER_ACCT pid=5040 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:34.060346 sshd[5040]: Accepted publickey for core from 10.0.0.1 port 53682 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 09:44:34.061000 audit[5040]: CRED_ACQ pid=5040 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:34.061000 audit[5040]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffcfc2040 a2=3 a3=1 items=0 ppid=1 pid=5040 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:44:34.061000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 09:44:34.062312 sshd[5040]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:44:34.066186 systemd-logind[1203]: New session 21 of user core. Feb 9 09:44:34.066998 systemd[1]: Started session-21.scope. 
Feb 9 09:44:34.073000 audit[5040]: USER_START pid=5040 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:34.074000 audit[5043]: CRED_ACQ pid=5043 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:34.087724 kubelet[2163]: I0209 09:44:34.087673 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qp2m\" (UniqueName: \"kubernetes.io/projected/fa53c5e3-3b0a-431a-96eb-d9d1f7d73aa2-kube-api-access-9qp2m\") pod \"calico-apiserver-66467cb67d-zdlnb\" (UID: \"fa53c5e3-3b0a-431a-96eb-d9d1f7d73aa2\") " pod="calico-apiserver/calico-apiserver-66467cb67d-zdlnb" Feb 9 09:44:34.087850 kubelet[2163]: I0209 09:44:34.087783 2163 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fa53c5e3-3b0a-431a-96eb-d9d1f7d73aa2-calico-apiserver-certs\") pod \"calico-apiserver-66467cb67d-zdlnb\" (UID: \"fa53c5e3-3b0a-431a-96eb-d9d1f7d73aa2\") " pod="calico-apiserver/calico-apiserver-66467cb67d-zdlnb" Feb 9 09:44:34.181541 sshd[5040]: pam_unix(sshd:session): session closed for user core Feb 9 09:44:34.182000 audit[5040]: USER_END pid=5040 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:34.182000 audit[5040]: CRED_DISP pid=5040 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 09:44:34.183844 systemd[1]: sshd@20-10.0.0.11:22-10.0.0.1:53682.service: Deactivated successfully. Feb 9 09:44:34.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.11:22-10.0.0.1:53682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:44:34.184888 systemd-logind[1203]: Session 21 logged out. Waiting for processes to exit. Feb 9 09:44:34.184936 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 09:44:34.185686 systemd-logind[1203]: Removed session 21. Feb 9 09:44:34.188214 kubelet[2163]: E0209 09:44:34.188176 2163 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Feb 9 09:44:34.189721 kubelet[2163]: E0209 09:44:34.188463 2163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa53c5e3-3b0a-431a-96eb-d9d1f7d73aa2-calico-apiserver-certs podName:fa53c5e3-3b0a-431a-96eb-d9d1f7d73aa2 nodeName:}" failed. No retries permitted until 2024-02-09 09:44:34.68824585 +0000 UTC m=+84.981026221 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/fa53c5e3-3b0a-431a-96eb-d9d1f7d73aa2-calico-apiserver-certs") pod "calico-apiserver-66467cb67d-zdlnb" (UID: "fa53c5e3-3b0a-431a-96eb-d9d1f7d73aa2") : secret "calico-apiserver-certs" not found Feb 9 09:44:34.691222 kubelet[2163]: E0209 09:44:34.691186 2163 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Feb 9 09:44:34.691499 kubelet[2163]: E0209 09:44:34.691482 2163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa53c5e3-3b0a-431a-96eb-d9d1f7d73aa2-calico-apiserver-certs podName:fa53c5e3-3b0a-431a-96eb-d9d1f7d73aa2 nodeName:}" failed. No retries permitted until 2024-02-09 09:44:35.691464001 +0000 UTC m=+85.984244412 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/fa53c5e3-3b0a-431a-96eb-d9d1f7d73aa2-calico-apiserver-certs") pod "calico-apiserver-66467cb67d-zdlnb" (UID: "fa53c5e3-3b0a-431a-96eb-d9d1f7d73aa2") : secret "calico-apiserver-certs" not found Feb 9 09:44:35.720961 env[1222]: time="2024-02-09T09:44:35.720908328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66467cb67d-zdlnb,Uid:fa53c5e3-3b0a-431a-96eb-d9d1f7d73aa2,Namespace:calico-apiserver,Attempt:0,}" Feb 9 09:44:35.843627 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:44:35.843746 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calibcec372f6fe: link becomes ready Feb 9 09:44:35.842888 systemd-networkd[1095]: calibcec372f6fe: Link UP Feb 9 09:44:35.843148 systemd-networkd[1095]: calibcec372f6fe: Gained carrier Feb 9 09:44:35.863125 env[1222]: 2024-02-09 09:44:35.758 [INFO][5058] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--66467cb67d--zdlnb-eth0 calico-apiserver-66467cb67d- calico-apiserver 
fa53c5e3-3b0a-431a-96eb-d9d1f7d73aa2 1097 0 2024-02-09 09:44:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66467cb67d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-66467cb67d-zdlnb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibcec372f6fe [] []}} ContainerID="e091ef4e1d1f0f9f1590dfffb7920ceb6e22a3069961fbd8978aea0ba5d9476f" Namespace="calico-apiserver" Pod="calico-apiserver-66467cb67d-zdlnb" WorkloadEndpoint="localhost-k8s-calico--apiserver--66467cb67d--zdlnb-" Feb 9 09:44:35.863125 env[1222]: 2024-02-09 09:44:35.758 [INFO][5058] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="e091ef4e1d1f0f9f1590dfffb7920ceb6e22a3069961fbd8978aea0ba5d9476f" Namespace="calico-apiserver" Pod="calico-apiserver-66467cb67d-zdlnb" WorkloadEndpoint="localhost-k8s-calico--apiserver--66467cb67d--zdlnb-eth0" Feb 9 09:44:35.863125 env[1222]: 2024-02-09 09:44:35.788 [INFO][5071] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e091ef4e1d1f0f9f1590dfffb7920ceb6e22a3069961fbd8978aea0ba5d9476f" HandleID="k8s-pod-network.e091ef4e1d1f0f9f1590dfffb7920ceb6e22a3069961fbd8978aea0ba5d9476f" Workload="localhost-k8s-calico--apiserver--66467cb67d--zdlnb-eth0" Feb 9 09:44:35.863125 env[1222]: 2024-02-09 09:44:35.806 [INFO][5071] ipam_plugin.go 268: Auto assigning IP ContainerID="e091ef4e1d1f0f9f1590dfffb7920ceb6e22a3069961fbd8978aea0ba5d9476f" HandleID="k8s-pod-network.e091ef4e1d1f0f9f1590dfffb7920ceb6e22a3069961fbd8978aea0ba5d9476f" Workload="localhost-k8s-calico--apiserver--66467cb67d--zdlnb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000245300), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-66467cb67d-zdlnb", "timestamp":"2024-02-09 
09:44:35.78806734 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 09:44:35.863125 env[1222]: 2024-02-09 09:44:35.806 [INFO][5071] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:44:35.863125 env[1222]: 2024-02-09 09:44:35.808 [INFO][5071] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:44:35.863125 env[1222]: 2024-02-09 09:44:35.808 [INFO][5071] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 9 09:44:35.863125 env[1222]: 2024-02-09 09:44:35.811 [INFO][5071] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e091ef4e1d1f0f9f1590dfffb7920ceb6e22a3069961fbd8978aea0ba5d9476f" host="localhost" Feb 9 09:44:35.863125 env[1222]: 2024-02-09 09:44:35.816 [INFO][5071] ipam.go 372: Looking up existing affinities for host host="localhost" Feb 9 09:44:35.863125 env[1222]: 2024-02-09 09:44:35.821 [INFO][5071] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 9 09:44:35.863125 env[1222]: 2024-02-09 09:44:35.824 [INFO][5071] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 9 09:44:35.863125 env[1222]: 2024-02-09 09:44:35.827 [INFO][5071] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 9 09:44:35.863125 env[1222]: 2024-02-09 09:44:35.827 [INFO][5071] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e091ef4e1d1f0f9f1590dfffb7920ceb6e22a3069961fbd8978aea0ba5d9476f" host="localhost" Feb 9 09:44:35.863125 env[1222]: 2024-02-09 09:44:35.829 [INFO][5071] ipam.go 1682: Creating new handle: k8s-pod-network.e091ef4e1d1f0f9f1590dfffb7920ceb6e22a3069961fbd8978aea0ba5d9476f Feb 9 09:44:35.863125 env[1222]: 2024-02-09 09:44:35.832 [INFO][5071] ipam.go 1203: 
Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e091ef4e1d1f0f9f1590dfffb7920ceb6e22a3069961fbd8978aea0ba5d9476f" host="localhost" Feb 9 09:44:35.863125 env[1222]: 2024-02-09 09:44:35.837 [INFO][5071] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.e091ef4e1d1f0f9f1590dfffb7920ceb6e22a3069961fbd8978aea0ba5d9476f" host="localhost" Feb 9 09:44:35.863125 env[1222]: 2024-02-09 09:44:35.837 [INFO][5071] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.e091ef4e1d1f0f9f1590dfffb7920ceb6e22a3069961fbd8978aea0ba5d9476f" host="localhost" Feb 9 09:44:35.863125 env[1222]: 2024-02-09 09:44:35.837 [INFO][5071] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:44:35.863125 env[1222]: 2024-02-09 09:44:35.837 [INFO][5071] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="e091ef4e1d1f0f9f1590dfffb7920ceb6e22a3069961fbd8978aea0ba5d9476f" HandleID="k8s-pod-network.e091ef4e1d1f0f9f1590dfffb7920ceb6e22a3069961fbd8978aea0ba5d9476f" Workload="localhost-k8s-calico--apiserver--66467cb67d--zdlnb-eth0" Feb 9 09:44:35.864738 env[1222]: 2024-02-09 09:44:35.839 [INFO][5058] k8s.go 385: Populated endpoint ContainerID="e091ef4e1d1f0f9f1590dfffb7920ceb6e22a3069961fbd8978aea0ba5d9476f" Namespace="calico-apiserver" Pod="calico-apiserver-66467cb67d-zdlnb" WorkloadEndpoint="localhost-k8s-calico--apiserver--66467cb67d--zdlnb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66467cb67d--zdlnb-eth0", GenerateName:"calico-apiserver-66467cb67d-", Namespace:"calico-apiserver", SelfLink:"", UID:"fa53c5e3-3b0a-431a-96eb-d9d1f7d73aa2", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 44, 33, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66467cb67d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-66467cb67d-zdlnb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibcec372f6fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:44:35.864738 env[1222]: 2024-02-09 09:44:35.839 [INFO][5058] k8s.go 386: Calico CNI using IPs: [192.168.88.133/32] ContainerID="e091ef4e1d1f0f9f1590dfffb7920ceb6e22a3069961fbd8978aea0ba5d9476f" Namespace="calico-apiserver" Pod="calico-apiserver-66467cb67d-zdlnb" WorkloadEndpoint="localhost-k8s-calico--apiserver--66467cb67d--zdlnb-eth0" Feb 9 09:44:35.864738 env[1222]: 2024-02-09 09:44:35.839 [INFO][5058] dataplane_linux.go 68: Setting the host side veth name to calibcec372f6fe ContainerID="e091ef4e1d1f0f9f1590dfffb7920ceb6e22a3069961fbd8978aea0ba5d9476f" Namespace="calico-apiserver" Pod="calico-apiserver-66467cb67d-zdlnb" WorkloadEndpoint="localhost-k8s-calico--apiserver--66467cb67d--zdlnb-eth0" Feb 9 09:44:35.864738 env[1222]: 2024-02-09 09:44:35.844 [INFO][5058] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e091ef4e1d1f0f9f1590dfffb7920ceb6e22a3069961fbd8978aea0ba5d9476f" Namespace="calico-apiserver" Pod="calico-apiserver-66467cb67d-zdlnb" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--66467cb67d--zdlnb-eth0" Feb 9 09:44:35.864738 env[1222]: 2024-02-09 09:44:35.845 [INFO][5058] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="e091ef4e1d1f0f9f1590dfffb7920ceb6e22a3069961fbd8978aea0ba5d9476f" Namespace="calico-apiserver" Pod="calico-apiserver-66467cb67d-zdlnb" WorkloadEndpoint="localhost-k8s-calico--apiserver--66467cb67d--zdlnb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66467cb67d--zdlnb-eth0", GenerateName:"calico-apiserver-66467cb67d-", Namespace:"calico-apiserver", SelfLink:"", UID:"fa53c5e3-3b0a-431a-96eb-d9d1f7d73aa2", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 44, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66467cb67d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e091ef4e1d1f0f9f1590dfffb7920ceb6e22a3069961fbd8978aea0ba5d9476f", Pod:"calico-apiserver-66467cb67d-zdlnb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibcec372f6fe", MAC:"e2:22:ea:d2:dc:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 
09:44:35.864738 env[1222]: 2024-02-09 09:44:35.856 [INFO][5058] k8s.go 491: Wrote updated endpoint to datastore ContainerID="e091ef4e1d1f0f9f1590dfffb7920ceb6e22a3069961fbd8978aea0ba5d9476f" Namespace="calico-apiserver" Pod="calico-apiserver-66467cb67d-zdlnb" WorkloadEndpoint="localhost-k8s-calico--apiserver--66467cb67d--zdlnb-eth0" Feb 9 09:44:35.881885 env[1222]: time="2024-02-09T09:44:35.881823996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:44:35.882060 env[1222]: time="2024-02-09T09:44:35.882036517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:44:35.882145 env[1222]: time="2024-02-09T09:44:35.882124838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:44:35.882399 env[1222]: time="2024-02-09T09:44:35.882366439Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e091ef4e1d1f0f9f1590dfffb7920ceb6e22a3069961fbd8978aea0ba5d9476f pid=5107 runtime=io.containerd.runc.v2 Feb 9 09:44:35.882000 audit[5112]: NETFILTER_CFG table=filter:137 family=2 entries=59 op=nft_register_chain pid=5112 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 09:44:35.882000 audit[5112]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29292 a0=3 a1=fffff0a784e0 a2=0 a3=ffffbd7e3fa8 items=0 ppid=3367 pid=5112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:44:35.882000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 09:44:35.924991 
systemd-resolved[1155]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 09:44:35.945162 env[1222]: time="2024-02-09T09:44:35.943684336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66467cb67d-zdlnb,Uid:fa53c5e3-3b0a-431a-96eb-d9d1f7d73aa2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e091ef4e1d1f0f9f1590dfffb7920ceb6e22a3069961fbd8978aea0ba5d9476f\"" Feb 9 09:44:35.946609 env[1222]: time="2024-02-09T09:44:35.946568233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\""