Feb 9 18:34:14.715996 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 9 18:34:14.716017 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 17:24:35 -00 2024
Feb 9 18:34:14.716025 kernel: efi: EFI v2.70 by EDK II
Feb 9 18:34:14.716031 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Feb 9 18:34:14.716036 kernel: random: crng init done
Feb 9 18:34:14.716041 kernel: ACPI: Early table checksum verification disabled
Feb 9 18:34:14.716048 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Feb 9 18:34:14.716055 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 9 18:34:14.716060 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:34:14.716065 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:34:14.716071 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:34:14.716076 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:34:14.716082 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:34:14.716087 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:34:14.716095 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:34:14.716101 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:34:14.716107 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:34:14.716112 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 9 18:34:14.716118 kernel: NUMA: Failed to initialise from firmware
Feb 9 18:34:14.716124 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 18:34:14.716129 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Feb 9 18:34:14.716135 kernel: Zone ranges:
Feb 9 18:34:14.716140 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 18:34:14.716147 kernel: DMA32 empty
Feb 9 18:34:14.716152 kernel: Normal empty
Feb 9 18:34:14.716158 kernel: Movable zone start for each node
Feb 9 18:34:14.716164 kernel: Early memory node ranges
Feb 9 18:34:14.716169 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Feb 9 18:34:14.716175 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Feb 9 18:34:14.716180 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Feb 9 18:34:14.716186 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Feb 9 18:34:14.716192 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Feb 9 18:34:14.716198 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Feb 9 18:34:14.716203 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Feb 9 18:34:14.716209 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 9 18:34:14.716215 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 9 18:34:14.716221 kernel: psci: probing for conduit method from ACPI.
Feb 9 18:34:14.716227 kernel: psci: PSCIv1.1 detected in firmware.
Feb 9 18:34:14.716232 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 18:34:14.716238 kernel: psci: Trusted OS migration not required
Feb 9 18:34:14.716246 kernel: psci: SMC Calling Convention v1.1
Feb 9 18:34:14.716252 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 9 18:34:14.716259 kernel: ACPI: SRAT not present
Feb 9 18:34:14.716265 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 18:34:14.716271 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 18:34:14.716277 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 9 18:34:14.716283 kernel: Detected PIPT I-cache on CPU0
Feb 9 18:34:14.716289 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 18:34:14.716296 kernel: CPU features: detected: Hardware dirty bit management
Feb 9 18:34:14.716302 kernel: CPU features: detected: Spectre-v4
Feb 9 18:34:14.716307 kernel: CPU features: detected: Spectre-BHB
Feb 9 18:34:14.716314 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 18:34:14.716320 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 18:34:14.716326 kernel: CPU features: detected: ARM erratum 1418040
Feb 9 18:34:14.716333 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 9 18:34:14.716339 kernel: Policy zone: DMA
Feb 9 18:34:14.716346 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4
Feb 9 18:34:14.716363 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 18:34:14.716370 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 18:34:14.716376 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 18:34:14.716382 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 18:34:14.716389 kernel: Memory: 2459152K/2572288K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 113136K reserved, 0K cma-reserved)
Feb 9 18:34:14.716396 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 9 18:34:14.716403 kernel: trace event string verifier disabled
Feb 9 18:34:14.716409 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 18:34:14.716415 kernel: rcu: RCU event tracing is enabled.
Feb 9 18:34:14.716421 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 9 18:34:14.716435 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 18:34:14.716441 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 18:34:14.716447 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 18:34:14.716453 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 9 18:34:14.716459 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 18:34:14.716465 kernel: GICv3: 256 SPIs implemented
Feb 9 18:34:14.716473 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 18:34:14.716479 kernel: GICv3: Distributor has no Range Selector support
Feb 9 18:34:14.716485 kernel: Root IRQ handler: gic_handle_irq
Feb 9 18:34:14.716491 kernel: GICv3: 16 PPIs implemented
Feb 9 18:34:14.716497 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 9 18:34:14.716503 kernel: ACPI: SRAT not present
Feb 9 18:34:14.716509 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 9 18:34:14.716515 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 9 18:34:14.716521 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Feb 9 18:34:14.716527 kernel: GICv3: using LPI property table @0x00000000400d0000
Feb 9 18:34:14.716533 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Feb 9 18:34:14.716539 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 18:34:14.716546 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 9 18:34:14.716552 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 9 18:34:14.716558 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 9 18:34:14.716564 kernel: arm-pv: using stolen time PV
Feb 9 18:34:14.716571 kernel: Console: colour dummy device 80x25
Feb 9 18:34:14.716577 kernel: ACPI: Core revision 20210730
Feb 9 18:34:14.716583 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 9 18:34:14.716590 kernel: pid_max: default: 32768 minimum: 301
Feb 9 18:34:14.716596 kernel: LSM: Security Framework initializing
Feb 9 18:34:14.716602 kernel: SELinux: Initializing.
Feb 9 18:34:14.716609 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 18:34:14.716615 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 18:34:14.716622 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 18:34:14.716628 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 9 18:34:14.716634 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 9 18:34:14.716640 kernel: Remapping and enabling EFI services.
Feb 9 18:34:14.716646 kernel: smp: Bringing up secondary CPUs ...
Feb 9 18:34:14.716652 kernel: Detected PIPT I-cache on CPU1
Feb 9 18:34:14.716659 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 9 18:34:14.716666 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Feb 9 18:34:14.716673 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 18:34:14.716679 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 9 18:34:14.716685 kernel: Detected PIPT I-cache on CPU2
Feb 9 18:34:14.716692 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 9 18:34:14.716698 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Feb 9 18:34:14.716704 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 18:34:14.716710 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 9 18:34:14.716716 kernel: Detected PIPT I-cache on CPU3
Feb 9 18:34:14.716723 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 9 18:34:14.716730 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Feb 9 18:34:14.716737 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 18:34:14.716743 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 9 18:34:14.716749 kernel: smp: Brought up 1 node, 4 CPUs
Feb 9 18:34:14.716759 kernel: SMP: Total of 4 processors activated.
Feb 9 18:34:14.716767 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 18:34:14.716803 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 9 18:34:14.716811 kernel: CPU features: detected: Common not Private translations
Feb 9 18:34:14.716818 kernel: CPU features: detected: CRC32 instructions
Feb 9 18:34:14.716824 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 9 18:34:14.716831 kernel: CPU features: detected: LSE atomic instructions
Feb 9 18:34:14.716838 kernel: CPU features: detected: Privileged Access Never
Feb 9 18:34:14.716846 kernel: CPU features: detected: RAS Extension Support
Feb 9 18:34:14.716852 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 9 18:34:14.716859 kernel: CPU: All CPU(s) started at EL1
Feb 9 18:34:14.716865 kernel: alternatives: patching kernel code
Feb 9 18:34:14.716873 kernel: devtmpfs: initialized
Feb 9 18:34:14.716879 kernel: KASLR enabled
Feb 9 18:34:14.716886 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 18:34:14.716893 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 9 18:34:14.716899 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 18:34:14.716906 kernel: SMBIOS 3.0.0 present.
Feb 9 18:34:14.716912 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Feb 9 18:34:14.716920 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 18:34:14.716926 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 18:34:14.716933 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 18:34:14.716940 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 18:34:14.716947 kernel: audit: initializing netlink subsys (disabled)
Feb 9 18:34:14.716953 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1
Feb 9 18:34:14.716960 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 18:34:14.716967 kernel: cpuidle: using governor menu
Feb 9 18:34:14.716973 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 18:34:14.716980 kernel: ASID allocator initialised with 32768 entries
Feb 9 18:34:14.716986 kernel: ACPI: bus type PCI registered
Feb 9 18:34:14.716993 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 18:34:14.717001 kernel: Serial: AMBA PL011 UART driver
Feb 9 18:34:14.717007 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 18:34:14.717014 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 18:34:14.717020 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 18:34:14.717027 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 18:34:14.717033 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 18:34:14.717040 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 18:34:14.717047 kernel: ACPI: Added _OSI(Module Device)
Feb 9 18:34:14.717053 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 18:34:14.717061 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 18:34:14.717067 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 18:34:14.717074 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 18:34:14.717080 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 18:34:14.717087 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 18:34:14.717093 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 18:34:14.717100 kernel: ACPI: Interpreter enabled
Feb 9 18:34:14.717106 kernel: ACPI: Using GIC for interrupt routing
Feb 9 18:34:14.717113 kernel: ACPI: MCFG table detected, 1 entries
Feb 9 18:34:14.717121 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 9 18:34:14.717127 kernel: printk: console [ttyAMA0] enabled
Feb 9 18:34:14.717134 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 9 18:34:14.717253 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 18:34:14.717317 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 9 18:34:14.717387 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 9 18:34:14.717454 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 9 18:34:14.717515 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 9 18:34:14.717524 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 9 18:34:14.717531 kernel: PCI host bridge to bus 0000:00
Feb 9 18:34:14.717596 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 9 18:34:14.717650 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 9 18:34:14.717702 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 9 18:34:14.717753 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 9 18:34:14.717825 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 9 18:34:14.717897 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 9 18:34:14.717960 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 9 18:34:14.718019 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 9 18:34:14.718079 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 9 18:34:14.718138 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 9 18:34:14.718199 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 9 18:34:14.718273 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 9 18:34:14.718325 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 9 18:34:14.718387 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 9 18:34:14.718449 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 9 18:34:14.718458 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 9 18:34:14.718464 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 9 18:34:14.718471 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 9 18:34:14.718480 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 9 18:34:14.718487 kernel: iommu: Default domain type: Translated
Feb 9 18:34:14.718493 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 18:34:14.718500 kernel: vgaarb: loaded
Feb 9 18:34:14.718506 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 18:34:14.718513 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 9 18:34:14.718520 kernel: PTP clock support registered
Feb 9 18:34:14.718526 kernel: Registered efivars operations
Feb 9 18:34:14.718533 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 18:34:14.718540 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 18:34:14.718548 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 18:34:14.718554 kernel: pnp: PnP ACPI init
Feb 9 18:34:14.718620 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 9 18:34:14.718630 kernel: pnp: PnP ACPI: found 1 devices
Feb 9 18:34:14.718636 kernel: NET: Registered PF_INET protocol family
Feb 9 18:34:14.718643 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 18:34:14.718650 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 18:34:14.718657 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 18:34:14.718665 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 18:34:14.718672 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 18:34:14.718679 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 18:34:14.718685 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 18:34:14.718692 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 18:34:14.718699 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 18:34:14.718705 kernel: PCI: CLS 0 bytes, default 64
Feb 9 18:34:14.718712 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 9 18:34:14.718720 kernel: kvm [1]: HYP mode not available
Feb 9 18:34:14.718727 kernel: Initialise system trusted keyrings
Feb 9 18:34:14.718733 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 18:34:14.718740 kernel: Key type asymmetric registered
Feb 9 18:34:14.718746 kernel: Asymmetric key parser 'x509' registered
Feb 9 18:34:14.718753 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 18:34:14.718759 kernel: io scheduler mq-deadline registered
Feb 9 18:34:14.718766 kernel: io scheduler kyber registered
Feb 9 18:34:14.718772 kernel: io scheduler bfq registered
Feb 9 18:34:14.718779 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 9 18:34:14.718787 kernel: ACPI: button: Power Button [PWRB]
Feb 9 18:34:14.718794 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 9 18:34:14.718853 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 9 18:34:14.718861 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 18:34:14.718868 kernel: thunder_xcv, ver 1.0
Feb 9 18:34:14.718874 kernel: thunder_bgx, ver 1.0
Feb 9 18:34:14.718881 kernel: nicpf, ver 1.0
Feb 9 18:34:14.718887 kernel: nicvf, ver 1.0
Feb 9 18:34:14.718954 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 18:34:14.719013 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T18:34:14 UTC (1707503654)
Feb 9 18:34:14.719021 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 18:34:14.719028 kernel: NET: Registered PF_INET6 protocol family
Feb 9 18:34:14.719035 kernel: Segment Routing with IPv6
Feb 9 18:34:14.719042 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 18:34:14.719049 kernel: NET: Registered PF_PACKET protocol family
Feb 9 18:34:14.719055 kernel: Key type dns_resolver registered
Feb 9 18:34:14.719062 kernel: registered taskstats version 1
Feb 9 18:34:14.719070 kernel: Loading compiled-in X.509 certificates
Feb 9 18:34:14.719076 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 947a80114e81e2815f6db72a0d388260762488f9'
Feb 9 18:34:14.719083 kernel: Key type .fscrypt registered
Feb 9 18:34:14.719089 kernel: Key type fscrypt-provisioning registered
Feb 9 18:34:14.719096 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 18:34:14.719102 kernel: ima: Allocated hash algorithm: sha1
Feb 9 18:34:14.719109 kernel: ima: No architecture policies found
Feb 9 18:34:14.719115 kernel: Freeing unused kernel memory: 34688K
Feb 9 18:34:14.719122 kernel: Run /init as init process
Feb 9 18:34:14.719129 kernel: with arguments:
Feb 9 18:34:14.719136 kernel: /init
Feb 9 18:34:14.719142 kernel: with environment:
Feb 9 18:34:14.719148 kernel: HOME=/
Feb 9 18:34:14.719154 kernel: TERM=linux
Feb 9 18:34:14.719161 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 18:34:14.719169 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 18:34:14.719178 systemd[1]: Detected virtualization kvm.
Feb 9 18:34:14.719186 systemd[1]: Detected architecture arm64.
Feb 9 18:34:14.719193 systemd[1]: Running in initrd.
Feb 9 18:34:14.719200 systemd[1]: No hostname configured, using default hostname.
Feb 9 18:34:14.719207 systemd[1]: Hostname set to <localhost>.
Feb 9 18:34:14.719214 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 18:34:14.719221 systemd[1]: Queued start job for default target initrd.target.
Feb 9 18:34:14.719228 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 18:34:14.719234 systemd[1]: Reached target cryptsetup.target.
Feb 9 18:34:14.719242 systemd[1]: Reached target paths.target.
Feb 9 18:34:14.719249 systemd[1]: Reached target slices.target.
Feb 9 18:34:14.719256 systemd[1]: Reached target swap.target.
Feb 9 18:34:14.719263 systemd[1]: Reached target timers.target.
Feb 9 18:34:14.719270 systemd[1]: Listening on iscsid.socket.
Feb 9 18:34:14.719277 systemd[1]: Listening on iscsiuio.socket.
Feb 9 18:34:14.719284 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 18:34:14.719292 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 18:34:14.719299 systemd[1]: Listening on systemd-journald.socket.
Feb 9 18:34:14.719306 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 18:34:14.719313 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 18:34:14.719320 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 18:34:14.719327 systemd[1]: Reached target sockets.target.
Feb 9 18:34:14.719334 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 18:34:14.719340 systemd[1]: Finished network-cleanup.service.
Feb 9 18:34:14.719347 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 18:34:14.719364 systemd[1]: Starting systemd-journald.service...
Feb 9 18:34:14.719372 systemd[1]: Starting systemd-modules-load.service...
Feb 9 18:34:14.719379 systemd[1]: Starting systemd-resolved.service...
Feb 9 18:34:14.719386 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 18:34:14.719393 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 18:34:14.719400 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 18:34:14.719406 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 18:34:14.719413 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 18:34:14.719421 kernel: audit: type=1130 audit(1707503654.714:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:14.719435 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 18:34:14.719445 systemd-journald[289]: Journal started
Feb 9 18:34:14.719483 systemd-journald[289]: Runtime Journal (/run/log/journal/1dae3ad951e242749fbb8e36af04af32) is 6.0M, max 48.7M, 42.6M free.
Feb 9 18:34:14.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:14.710058 systemd-modules-load[290]: Inserted module 'overlay'
Feb 9 18:34:14.720882 systemd[1]: Started systemd-journald.service.
Feb 9 18:34:14.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:14.723623 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 18:34:14.726943 kernel: audit: type=1130 audit(1707503654.721:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:14.726960 kernel: audit: type=1130 audit(1707503654.724:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:14.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:14.735844 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 18:34:14.736267 systemd-modules-load[290]: Inserted module 'br_netfilter'
Feb 9 18:34:14.736960 kernel: Bridge firewalling registered
Feb 9 18:34:14.738707 systemd-resolved[291]: Positive Trust Anchors:
Feb 9 18:34:14.738720 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 18:34:14.738747 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 18:34:14.742773 systemd-resolved[291]: Defaulting to hostname 'linux'.
Feb 9 18:34:14.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:14.743486 systemd[1]: Started systemd-resolved.service.
Feb 9 18:34:14.749549 kernel: audit: type=1130 audit(1707503654.744:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:14.749577 kernel: audit: type=1130 audit(1707503654.747:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:14.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:14.744689 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 18:34:14.750845 kernel: SCSI subsystem initialized
Feb 9 18:34:14.747475 systemd[1]: Reached target nss-lookup.target.
Feb 9 18:34:14.750948 systemd[1]: Starting dracut-cmdline.service...
Feb 9 18:34:14.758133 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 18:34:14.758165 kernel: device-mapper: uevent: version 1.0.3
Feb 9 18:34:14.758176 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 18:34:14.759239 dracut-cmdline[307]: dracut-dracut-053
Feb 9 18:34:14.760322 systemd-modules-load[290]: Inserted module 'dm_multipath'
Feb 9 18:34:14.760988 systemd[1]: Finished systemd-modules-load.service.
Feb 9 18:34:14.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:14.764063 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4
Feb 9 18:34:14.767668 kernel: audit: type=1130 audit(1707503654.761:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:14.762269 systemd[1]: Starting systemd-sysctl.service...
Feb 9 18:34:14.769895 systemd[1]: Finished systemd-sysctl.service.
Feb 9 18:34:14.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:14.773372 kernel: audit: type=1130 audit(1707503654.770:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:14.816374 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 18:34:14.824375 kernel: iscsi: registered transport (tcp)
Feb 9 18:34:14.837379 kernel: iscsi: registered transport (qla4xxx)
Feb 9 18:34:14.837391 kernel: QLogic iSCSI HBA Driver
Feb 9 18:34:14.868174 systemd[1]: Finished dracut-cmdline.service.
Feb 9 18:34:14.873436 kernel: audit: type=1130 audit(1707503654.868:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:14.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:14.869485 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 18:34:14.911373 kernel: raid6: neonx8 gen() 13753 MB/s
Feb 9 18:34:14.928370 kernel: raid6: neonx8 xor() 10802 MB/s
Feb 9 18:34:14.945365 kernel: raid6: neonx4 gen() 13542 MB/s
Feb 9 18:34:14.962368 kernel: raid6: neonx4 xor() 11288 MB/s
Feb 9 18:34:14.979365 kernel: raid6: neonx2 gen() 12893 MB/s
Feb 9 18:34:14.996372 kernel: raid6: neonx2 xor() 10372 MB/s
Feb 9 18:34:15.013365 kernel: raid6: neonx1 gen() 10458 MB/s
Feb 9 18:34:15.030373 kernel: raid6: neonx1 xor() 8763 MB/s
Feb 9 18:34:15.047365 kernel: raid6: int64x8 gen() 6279 MB/s
Feb 9 18:34:15.064365 kernel: raid6: int64x8 xor() 3539 MB/s
Feb 9 18:34:15.081393 kernel: raid6: int64x4 gen() 7233 MB/s
Feb 9 18:34:15.098369 kernel: raid6: int64x4 xor() 3832 MB/s
Feb 9 18:34:15.115376 kernel: raid6: int64x2 gen() 6142 MB/s
Feb 9 18:34:15.132377 kernel: raid6: int64x2 xor() 3313 MB/s
Feb 9 18:34:15.149377 kernel: raid6: int64x1 gen() 5022 MB/s
Feb 9 18:34:15.166557 kernel: raid6: int64x1 xor() 2631 MB/s
Feb 9 18:34:15.166592 kernel: raid6: using algorithm neonx8 gen() 13753 MB/s
Feb 9 18:34:15.166609 kernel: raid6: .... xor() 10802 MB/s, rmw enabled
Feb 9 18:34:15.166623 kernel: raid6: using neon recovery algorithm
Feb 9 18:34:15.177648 kernel: xor: measuring software checksum speed
Feb 9 18:34:15.177675 kernel: 8regs : 17293 MB/sec
Feb 9 18:34:15.178491 kernel: 32regs : 20755 MB/sec
Feb 9 18:34:15.179676 kernel: arm64_neon : 27901 MB/sec
Feb 9 18:34:15.179688 kernel: xor: using function: arm64_neon (27901 MB/sec)
Feb 9 18:34:15.246393 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 18:34:15.256892 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 18:34:15.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:15.259000 audit: BPF prog-id=7 op=LOAD
Feb 9 18:34:15.259000 audit: BPF prog-id=8 op=LOAD
Feb 9 18:34:15.260373 kernel: audit: type=1130 audit(1707503655.257:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:15.260407 systemd[1]: Starting systemd-udevd.service...
Feb 9 18:34:15.275122 systemd-udevd[492]: Using default interface naming scheme 'v252'.
Feb 9 18:34:15.278476 systemd[1]: Started systemd-udevd.service.
Feb 9 18:34:15.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:15.279942 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 18:34:15.291883 dracut-pre-trigger[497]: rd.md=0: removing MD RAID activation
Feb 9 18:34:15.318649 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 18:34:15.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:15.320220 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 18:34:15.352491 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 18:34:15.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:15.380494 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 9 18:34:15.384610 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 9 18:34:15.384647 kernel: GPT:9289727 != 19775487
Feb 9 18:34:15.384656 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 9 18:34:15.385409 kernel: GPT:9289727 != 19775487
Feb 9 18:34:15.385432 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 9 18:34:15.386371 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 18:34:15.398375 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (550)
Feb 9 18:34:15.400535 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 18:34:15.403343 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 18:34:15.404339 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 18:34:15.410623 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 18:34:15.415936 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 18:34:15.417602 systemd[1]: Starting disk-uuid.service...
Feb 9 18:34:15.423486 disk-uuid[562]: Primary Header is updated.
Feb 9 18:34:15.423486 disk-uuid[562]: Secondary Entries is updated.
Feb 9 18:34:15.423486 disk-uuid[562]: Secondary Header is updated.
Feb 9 18:34:15.426383 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 18:34:16.438384 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 18:34:16.438480 disk-uuid[563]: The operation has completed successfully.
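The "GPT:Alternate GPT header not at the end of the disk" warnings above are what the kernel prints when a disk image has been grown after partitioning, leaving the backup GPT header at the old end of the disk. One possible repair, printed as a dry run below, uses `sgdisk` from gdisk; the log's own suggestion, GNU Parted, offers the same fix interactively when it prompts to fix the GPT. Treat the whole recipe as an illustrative sketch, not a verified procedure for this image:

```shell
# Dry-run sketch: print (do not execute) the commands that would relocate
# the backup GPT header to the true end of the disk. /dev/vda matches the
# log, but the repair commands themselves are an assumption.
disk=/dev/vda
printf 'sgdisk -e %s\n' "$disk"    # move backup header and entries to end of disk
printf 'partprobe %s\n' "$disk"    # ask the kernel to re-read the partition table
```

In this boot the warning is benign: disk-uuid.service rewrites the table moments later and the log reports "The operation has completed successfully."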
Feb 9 18:34:16.463017 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 18:34:16.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:16.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:16.463119 systemd[1]: Finished disk-uuid.service.
Feb 9 18:34:16.467327 systemd[1]: Starting verity-setup.service...
Feb 9 18:34:16.483381 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 9 18:34:16.509593 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 18:34:16.511163 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 18:34:16.511956 systemd[1]: Finished verity-setup.service.
Feb 9 18:34:16.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:16.563014 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 18:34:16.564123 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 18:34:16.563825 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 18:34:16.564463 systemd[1]: Starting ignition-setup.service...
Feb 9 18:34:16.566271 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 18:34:16.572639 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 18:34:16.572673 kernel: BTRFS info (device vda6): using free space tree
Feb 9 18:34:16.572682 kernel: BTRFS info (device vda6): has skinny extents
Feb 9 18:34:16.579883 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 18:34:16.586026 systemd[1]: Finished ignition-setup.service.
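verity-setup.service above maps the read-only /usr partition through dm-verity, using the root hash passed on the kernel command line (the `verity.usrhash=680ffc8c…` parameter logged earlier). A dry-run sketch of the equivalent manual step with cryptsetup's `veritysetup`; only the hash value comes from this log, the data/hash device paths are hypothetical placeholders:

```shell
# Dry-run sketch: print the veritysetup call that would create /dev/mapper/usr.
# Device paths are assumptions; the root hash is the verity.usrhash from the log.
usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4
printf 'veritysetup open /dev/vda3 usr /dev/vda4 %s\n' "$usrhash"
```

Any block that fails verification against this hash would make reads from dm-0 error out, which is why the EXT4 mount of /usr above is read-only and journal-less.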
Feb 9 18:34:16.587533 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 18:34:16.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:16.642606 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 18:34:16.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:16.643000 audit: BPF prog-id=9 op=LOAD
Feb 9 18:34:16.644565 systemd[1]: Starting systemd-networkd.service...
Feb 9 18:34:16.663593 systemd-networkd[739]: lo: Link UP
Feb 9 18:34:16.664222 systemd-networkd[739]: lo: Gained carrier
Feb 9 18:34:16.665561 systemd-networkd[739]: Enumeration completed
Feb 9 18:34:16.666286 systemd[1]: Started systemd-networkd.service.
Feb 9 18:34:16.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:16.667016 systemd[1]: Reached target network.target.
Feb 9 18:34:16.668640 systemd[1]: Starting iscsiuio.service...
Feb 9 18:34:16.670014 systemd-networkd[739]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
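eth0 is picked up by the catch-all unit /usr/lib/systemd/network/zz-default.network named above. A sketch of what such a lowest-priority DHCP unit looks like; this is illustrative of the systemd.network format, not a copy of the file Flatcar ships:

```ini
# Sketch of a catch-all DHCP .network unit (illustrative, not the shipped file)
[Match]
Name=*

[Network]
DHCP=yes
```

Because the "zz-" prefix sorts last, any more specific .network unit dropped into /etc/systemd/network would match eth0 first and override it.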
Feb 9 18:34:16.671724 systemd-networkd[739]: eth0: Link UP
Feb 9 18:34:16.671732 systemd-networkd[739]: eth0: Gained carrier
Feb 9 18:34:16.678248 ignition[650]: Ignition 2.14.0
Feb 9 18:34:16.678258 ignition[650]: Stage: fetch-offline
Feb 9 18:34:16.678304 ignition[650]: no configs at "/usr/lib/ignition/base.d"
Feb 9 18:34:16.678313 ignition[650]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 18:34:16.678469 ignition[650]: parsed url from cmdline: ""
Feb 9 18:34:16.678472 ignition[650]: no config URL provided
Feb 9 18:34:16.678477 ignition[650]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 18:34:16.681754 systemd[1]: Started iscsiuio.service.
Feb 9 18:34:16.678484 ignition[650]: no config at "/usr/lib/ignition/user.ign"
Feb 9 18:34:16.678502 ignition[650]: op(1): [started] loading QEMU firmware config module
Feb 9 18:34:16.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:16.685048 systemd[1]: Starting iscsid.service...
Feb 9 18:34:16.678506 ignition[650]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 9 18:34:16.685062 ignition[650]: op(1): [finished] loading QEMU firmware config module
Feb 9 18:34:16.685083 ignition[650]: QEMU firmware config was not found. Ignoring...
Feb 9 18:34:16.688312 iscsid[745]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 18:34:16.688312 iscsid[745]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 9 18:34:16.688312 iscsid[745]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 18:34:16.688312 iscsid[745]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 18:34:16.688312 iscsid[745]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 18:34:16.688312 iscsid[745]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 18:34:16.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:16.689433 systemd-networkd[739]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 9 18:34:16.691934 systemd[1]: Started iscsid.service.
Feb 9 18:34:16.695478 systemd[1]: Starting dracut-initqueue.service...
Feb 9 18:34:16.705095 systemd[1]: Finished dracut-initqueue.service.
Feb 9 18:34:16.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:16.706063 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 18:34:16.707429 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 18:34:16.708858 systemd[1]: Reached target remote-fs.target.
Feb 9 18:34:16.710894 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 18:34:16.717952 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 18:34:16.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:16.768168 ignition[650]: parsing config with SHA512: 590d7bbcde6a96edae999a49a366610c526fcdacb831be33f6ac14a4f2479d99f336671ec98f76b608f1c930654f15b1282aa5ec0c1ec8c1f7be8a48192a2cd3
Feb 9 18:34:16.806128 systemd-resolved[291]: Detected conflict on linux IN A 10.0.0.89
Feb 9 18:34:16.806145 systemd-resolved[291]: Hostname conflict, changing published hostname from 'linux' to 'linux9'.
Feb 9 18:34:16.806327 unknown[650]: fetched base config from "system"
Feb 9 18:34:16.807048 ignition[650]: fetch-offline: fetch-offline passed
Feb 9 18:34:16.806334 unknown[650]: fetched user config from "qemu"
Feb 9 18:34:16.807108 ignition[650]: Ignition finished successfully
Feb 9 18:34:16.810954 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 18:34:16.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:16.811739 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 9 18:34:16.812378 systemd[1]: Starting ignition-kargs.service...
Feb 9 18:34:16.820350 ignition[760]: Ignition 2.14.0
Feb 9 18:34:16.820383 ignition[760]: Stage: kargs
Feb 9 18:34:16.820478 ignition[760]: no configs at "/usr/lib/ignition/base.d"
Feb 9 18:34:16.820488 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 18:34:16.821479 ignition[760]: kargs: kargs passed
Feb 9 18:34:16.821519 ignition[760]: Ignition finished successfully
Feb 9 18:34:16.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:16.823648 systemd[1]: Finished ignition-kargs.service.
Feb 9 18:34:16.825057 systemd[1]: Starting ignition-disks.service...
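The fetch-offline stage above found no config at /usr/lib/ignition/user.ign and instead took the user config over the QEMU firmware channel ("fetched user config from \"qemu\""). For reference, a minimal Ignition config of the kind that produces the later "adding ssh keys to user core" step might look like the sketch below; the spec version and key material are assumptions, not values recovered from this log:

```json
{
  "ignition": { "version": "3.3.0" },
  "passwd": {
    "users": [
      {
        "name": "core",
        "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder-key"]
      }
    ]
  }
}
```

The "parsing config with SHA512" line above is Ignition hashing the config it actually received before acting on it.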
Feb 9 18:34:16.831320 ignition[766]: Ignition 2.14.0
Feb 9 18:34:16.831329 ignition[766]: Stage: disks
Feb 9 18:34:16.831440 ignition[766]: no configs at "/usr/lib/ignition/base.d"
Feb 9 18:34:16.831449 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 18:34:16.833654 systemd[1]: Finished ignition-disks.service.
Feb 9 18:34:16.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:16.832522 ignition[766]: disks: disks passed
Feb 9 18:34:16.834867 systemd[1]: Reached target initrd-root-device.target.
Feb 9 18:34:16.832563 ignition[766]: Ignition finished successfully
Feb 9 18:34:16.835816 systemd[1]: Reached target local-fs-pre.target.
Feb 9 18:34:16.836720 systemd[1]: Reached target local-fs.target.
Feb 9 18:34:16.837814 systemd[1]: Reached target sysinit.target.
Feb 9 18:34:16.838740 systemd[1]: Reached target basic.target.
Feb 9 18:34:16.840373 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 18:34:16.850089 systemd-fsck[774]: ROOT: clean, 602/553520 files, 56013/553472 blocks
Feb 9 18:34:16.855351 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 18:34:16.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:16.858028 systemd[1]: Mounting sysroot.mount...
Feb 9 18:34:16.864088 systemd[1]: Mounted sysroot.mount.
Feb 9 18:34:16.865301 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 18:34:16.864865 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 18:34:16.867390 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 18:34:16.868216 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 9 18:34:16.868254 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 18:34:16.868278 systemd[1]: Reached target ignition-diskful.target.
Feb 9 18:34:16.869976 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 18:34:16.871711 systemd[1]: Starting initrd-setup-root.service...
Feb 9 18:34:16.875592 initrd-setup-root[784]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 18:34:16.878796 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory
Feb 9 18:34:16.881516 initrd-setup-root[800]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 18:34:16.885167 initrd-setup-root[808]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 18:34:16.909397 systemd[1]: Finished initrd-setup-root.service.
Feb 9 18:34:16.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:16.910851 systemd[1]: Starting ignition-mount.service...
Feb 9 18:34:16.912098 systemd[1]: Starting sysroot-boot.service...
Feb 9 18:34:16.915737 bash[825]: umount: /sysroot/usr/share/oem: not mounted.
Feb 9 18:34:16.924657 ignition[827]: INFO : Ignition 2.14.0
Feb 9 18:34:16.925557 ignition[827]: INFO : Stage: mount
Feb 9 18:34:16.926270 ignition[827]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 18:34:16.926270 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 18:34:16.928285 ignition[827]: INFO : mount: mount passed
Feb 9 18:34:16.928285 ignition[827]: INFO : Ignition finished successfully
Feb 9 18:34:16.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:16.928216 systemd[1]: Finished ignition-mount.service.
Feb 9 18:34:16.931591 systemd[1]: Finished sysroot-boot.service.
Feb 9 18:34:16.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:17.519144 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 18:34:17.529766 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (836)
Feb 9 18:34:17.529801 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 18:34:17.529811 kernel: BTRFS info (device vda6): using free space tree
Feb 9 18:34:17.530732 kernel: BTRFS info (device vda6): has skinny extents
Feb 9 18:34:17.536884 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 18:34:17.538393 systemd[1]: Starting ignition-files.service...
Feb 9 18:34:17.552600 ignition[856]: INFO : Ignition 2.14.0
Feb 9 18:34:17.552600 ignition[856]: INFO : Stage: files
Feb 9 18:34:17.553758 ignition[856]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 18:34:17.553758 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 18:34:17.555254 ignition[856]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 18:34:17.558523 ignition[856]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 18:34:17.558523 ignition[856]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 18:34:17.560690 ignition[856]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 18:34:17.560690 ignition[856]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 18:34:17.562516 ignition[856]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 18:34:17.562462 unknown[856]: wrote ssh authorized keys file for user: core
Feb 9 18:34:17.565159 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 9 18:34:17.565159 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 9 18:34:17.612390 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 9 18:34:17.651461 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 9 18:34:17.651461 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 9 18:34:17.654450 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1
Feb 9 18:34:17.963663 systemd-networkd[739]: eth0: Gained IPv6LL
Feb 9 18:34:18.038390 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 18:34:18.225325 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742
Feb 9 18:34:18.225325 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 9 18:34:18.228876 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 9 18:34:18.228876 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1
Feb 9 18:34:18.454371 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 18:34:18.571351 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c
Feb 9 18:34:18.571351 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 9 18:34:18.575125 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 9 18:34:18.575125 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 9 18:34:18.575125 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 18:34:18.575125 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1
Feb 9 18:34:18.622726 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 9 18:34:19.128619 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db
Feb 9 18:34:19.130965 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 18:34:19.130965 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 9 18:34:19.130965 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubectl: attempt #1
Feb 9 18:34:19.178429 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 9 18:34:20.895526 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 3672fda0beebbbd636a2088f427463cbad32683ea4fbb1df61650552e63846b6a47db803ccb70c3db0a8f24746a23a5632bdc15a3fb78f4f7d833e7f86763c2a
Feb 9 18:34:20.897824 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 9 18:34:20.897824 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 18:34:20.897824 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1
Feb 9 18:34:20.943742 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Feb 9 18:34:24.798543 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d
Feb 9 18:34:24.801149 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 18:34:24.801149 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 18:34:24.801149 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 18:34:24.801149 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 18:34:24.801149 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 18:34:24.801149 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 18:34:24.801149 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 18:34:24.801149 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 18:34:24.801149 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 18:34:24.801149 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 18:34:24.801149 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 18:34:24.801149 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 18:34:24.801149 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 18:34:24.801149 ignition[856]: INFO : files: op(10): [started] processing unit "prepare-cni-plugins.service"
Feb 9 18:34:24.801149 ignition[856]: INFO : files: op(10): op(11): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 18:34:24.801149 ignition[856]: INFO : files: op(10): op(11): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 18:34:24.801149 ignition[856]: INFO : files: op(10): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 18:34:24.825320 ignition[856]: INFO : files: op(12): [started] processing unit "prepare-critools.service"
Feb 9 18:34:24.825320 ignition[856]: INFO : files: op(12): op(13): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 18:34:24.825320 ignition[856]: INFO : files: op(12): op(13): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 18:34:24.825320 ignition[856]: INFO : files: op(12): [finished] processing unit "prepare-critools.service"
Feb 9 18:34:24.825320 ignition[856]: INFO : files: op(14): [started] processing unit "prepare-helm.service"
Feb 9 18:34:24.825320 ignition[856]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 18:34:24.825320 ignition[856]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 18:34:24.825320 ignition[856]: INFO : files: op(14): [finished] processing unit "prepare-helm.service"
Feb 9 18:34:24.825320 ignition[856]: INFO : files: op(16): [started] processing unit "coreos-metadata.service"
Feb 9 18:34:24.825320 ignition[856]: INFO : files: op(16): op(17): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 9 18:34:24.825320 ignition[856]: INFO : files: op(16): op(17): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 9 18:34:24.825320 ignition[856]: INFO : files: op(16): [finished] processing unit "coreos-metadata.service"
Feb 9 18:34:24.825320 ignition[856]: INFO : files: op(18): [started] processing unit "containerd.service"
Feb 9 18:34:24.825320 ignition[856]: INFO : files: op(18): op(19): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 9 18:34:24.825320 ignition[856]: INFO : files: op(18): op(19): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 9 18:34:24.825320 ignition[856]: INFO : files: op(18): [finished] processing unit "containerd.service"
Feb 9 18:34:24.825320 ignition[856]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 18:34:24.825320 ignition[856]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 18:34:24.852878 kernel: kauditd_printk_skb: 23 callbacks suppressed
Feb 9 18:34:24.852903 kernel: audit: type=1130 audit(1707503664.845:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:24.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:24.852968 ignition[856]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 18:34:24.852968 ignition[856]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 18:34:24.852968 ignition[856]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-helm.service"
Feb 9 18:34:24.852968 ignition[856]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-helm.service"
Feb 9 18:34:24.852968 ignition[856]: INFO : files: op(1d): [started] setting preset to disabled for "coreos-metadata.service"
Feb 9 18:34:24.852968 ignition[856]: INFO : files: op(1d): op(1e): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 9 18:34:24.852968 ignition[856]: INFO : files: op(1d): op(1e): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 9 18:34:24.852968 ignition[856]: INFO : files: op(1d): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 9 18:34:24.852968 ignition[856]: INFO : files: createResultFile: createFiles: op(1f): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 18:34:24.852968 ignition[856]: INFO : files: createResultFile: createFiles: op(1f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 18:34:24.852968 ignition[856]: INFO : files: files passed
Feb 9 18:34:24.852968 ignition[856]: INFO : Ignition finished successfully
Feb 9 18:34:24.874874 kernel: audit: type=1130 audit(1707503664.855:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:24.874895 kernel: audit: type=1131 audit(1707503664.855:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:24.874906 kernel: audit: type=1130 audit(1707503664.857:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:24.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:24.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:24.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:34:24.843815 systemd[1]: Finished ignition-files.service.
Feb 9 18:34:24.846927 systemd[1]: Starting initrd-setup-root-after-ignition.service...
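Each artifact fetched in the files stage above is followed by a "file matches expected sum of: …" line: Ignition hashes the downloaded file and compares the digest against the SHA-512 sum given in the config. A minimal sketch of the same verification using coreutils; the file path and payload below are placeholders, not the real kubeadm/kubectl/kubelet artifacts:

```shell
# Write a placeholder payload, compute its SHA-512 digest, then verify it the
# way the "matches expected sum" lines imply: sha512sum -c against a known sum.
printf 'placeholder payload' > /tmp/kubeadm.demo
expected=$(printf 'placeholder payload' | sha512sum | awk '{print $1}')
echo "${expected}  /tmp/kubeadm.demo" | sha512sum -c -
# prints: /tmp/kubeadm.demo: OK
```

In Ignition's config these sums come from each file's `verification.hash` field, so a tampered or truncated download fails the stage instead of landing in /sysroot.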
Feb 9 18:34:24.876560 initrd-setup-root-after-ignition[881]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 9 18:34:24.850242 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 18:34:24.879087 initrd-setup-root-after-ignition[884]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 18:34:24.884380 kernel: audit: type=1130 audit(1707503664.879:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.884406 kernel: audit: type=1131 audit(1707503664.879:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.850898 systemd[1]: Starting ignition-quench.service... Feb 9 18:34:24.854104 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 18:34:24.854173 systemd[1]: Finished ignition-quench.service. Feb 9 18:34:24.856380 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 18:34:24.858244 systemd[1]: Reached target ignition-complete.target. Feb 9 18:34:24.866036 systemd[1]: Starting initrd-parse-etc.service... Feb 9 18:34:24.878023 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 18:34:24.878104 systemd[1]: Finished initrd-parse-etc.service. 
Feb 9 18:34:24.879970 systemd[1]: Reached target initrd-fs.target. Feb 9 18:34:24.885136 systemd[1]: Reached target initrd.target. Feb 9 18:34:24.886369 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 18:34:24.887031 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 18:34:24.896564 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 18:34:24.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.898035 systemd[1]: Starting initrd-cleanup.service... Feb 9 18:34:24.900655 kernel: audit: type=1130 audit(1707503664.896:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.905213 systemd[1]: Stopped target network.target. Feb 9 18:34:24.906054 systemd[1]: Stopped target nss-lookup.target. Feb 9 18:34:24.907183 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 18:34:24.908422 systemd[1]: Stopped target timers.target. Feb 9 18:34:24.909527 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 18:34:24.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.909627 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 18:34:24.914146 kernel: audit: type=1131 audit(1707503664.910:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.910726 systemd[1]: Stopped target initrd.target. Feb 9 18:34:24.913768 systemd[1]: Stopped target basic.target. Feb 9 18:34:24.914891 systemd[1]: Stopped target ignition-complete.target. 
Feb 9 18:34:24.916103 systemd[1]: Stopped target ignition-diskful.target. Feb 9 18:34:24.917279 systemd[1]: Stopped target initrd-root-device.target. Feb 9 18:34:24.918594 systemd[1]: Stopped target remote-fs.target. Feb 9 18:34:24.919796 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 18:34:24.921066 systemd[1]: Stopped target sysinit.target. Feb 9 18:34:24.922312 systemd[1]: Stopped target local-fs.target. Feb 9 18:34:24.923492 systemd[1]: Stopped target local-fs-pre.target. Feb 9 18:34:24.924648 systemd[1]: Stopped target swap.target. Feb 9 18:34:24.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.925721 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 18:34:24.930384 kernel: audit: type=1131 audit(1707503664.926:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.925821 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 18:34:24.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.933367 kernel: audit: type=1131 audit(1707503664.930:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.927023 systemd[1]: Stopped target cryptsetup.target. Feb 9 18:34:24.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.929850 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Feb 9 18:34:24.929947 systemd[1]: Stopped dracut-initqueue.service. Feb 9 18:34:24.931220 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 18:34:24.931313 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 18:34:24.934290 systemd[1]: Stopped target paths.target. Feb 9 18:34:24.935471 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 18:34:24.940406 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 18:34:24.941950 systemd[1]: Stopped target slices.target. Feb 9 18:34:24.942764 systemd[1]: Stopped target sockets.target. Feb 9 18:34:24.943875 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 18:34:24.943945 systemd[1]: Closed iscsid.socket. Feb 9 18:34:24.944954 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 18:34:24.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.945014 systemd[1]: Closed iscsiuio.socket. Feb 9 18:34:24.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.946145 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 18:34:24.946240 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 18:34:24.947412 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 18:34:24.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.947522 systemd[1]: Stopped ignition-files.service. Feb 9 18:34:24.949349 systemd[1]: Stopping ignition-mount.service... 
Feb 9 18:34:24.950457 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 18:34:24.956198 ignition[897]: INFO : Ignition 2.14.0 Feb 9 18:34:24.956198 ignition[897]: INFO : Stage: umount Feb 9 18:34:24.956198 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 18:34:24.956198 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:34:24.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.950586 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 18:34:24.963078 ignition[897]: INFO : umount: umount passed Feb 9 18:34:24.963078 ignition[897]: INFO : Ignition finished successfully Feb 9 18:34:24.952657 systemd[1]: Stopping sysroot-boot.service... Feb 9 18:34:24.955668 systemd[1]: Stopping systemd-networkd.service... Feb 9 18:34:24.957082 systemd[1]: Stopping systemd-resolved.service... Feb 9 18:34:24.957859 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 18:34:24.957970 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 18:34:24.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.959113 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 18:34:24.959266 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 18:34:24.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:34:24.960600 systemd-networkd[739]: eth0: DHCPv6 lease lost Feb 9 18:34:24.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.965573 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 18:34:24.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.966281 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 18:34:24.966380 systemd[1]: Stopped systemd-networkd.service. Feb 9 18:34:24.973000 audit: BPF prog-id=9 op=UNLOAD Feb 9 18:34:24.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.967873 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 18:34:24.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.967954 systemd[1]: Stopped systemd-resolved.service. Feb 9 18:34:24.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.969656 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 18:34:24.969730 systemd[1]: Stopped ignition-mount.service. 
Feb 9 18:34:24.970660 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 18:34:24.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.970728 systemd[1]: Stopped sysroot-boot.service. Feb 9 18:34:24.980000 audit: BPF prog-id=6 op=UNLOAD Feb 9 18:34:24.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.971891 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 18:34:24.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.971970 systemd[1]: Closed systemd-networkd.socket. Feb 9 18:34:24.972657 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 18:34:24.972692 systemd[1]: Stopped ignition-disks.service. Feb 9 18:34:24.973904 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 18:34:24.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.973950 systemd[1]: Stopped ignition-kargs.service. Feb 9 18:34:24.974579 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 18:34:24.974614 systemd[1]: Stopped ignition-setup.service. Feb 9 18:34:24.975573 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Feb 9 18:34:24.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.975607 systemd[1]: Stopped initrd-setup-root.service. Feb 9 18:34:24.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.977342 systemd[1]: Stopping network-cleanup.service... Feb 9 18:34:24.978421 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 18:34:24.978471 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 18:34:24.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.979550 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 18:34:24.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.979586 systemd[1]: Stopped systemd-sysctl.service. Feb 9 18:34:24.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.981218 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 18:34:24.981255 systemd[1]: Stopped systemd-modules-load.service. Feb 9 18:34:24.982223 systemd[1]: Stopping systemd-udevd.service... Feb 9 18:34:25.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:34:24.984219 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 18:34:24.984704 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 18:34:24.984782 systemd[1]: Finished initrd-cleanup.service. Feb 9 18:34:24.988237 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 18:34:25.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:25.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:24.988345 systemd[1]: Stopped systemd-udevd.service. Feb 9 18:34:24.989651 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 18:34:24.989724 systemd[1]: Stopped network-cleanup.service. Feb 9 18:34:24.990919 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 18:34:24.990950 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 18:34:24.992067 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 18:34:24.992096 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 18:34:24.993290 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 18:34:24.993331 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 18:34:24.994717 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 18:34:24.994760 systemd[1]: Stopped dracut-cmdline.service. Feb 9 18:34:24.995971 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 18:34:24.996008 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 18:34:24.997956 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 18:34:24.999188 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Feb 9 18:34:25.014000 audit: BPF prog-id=8 op=UNLOAD Feb 9 18:34:25.014000 audit: BPF prog-id=7 op=UNLOAD Feb 9 18:34:24.999239 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 18:34:25.002800 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 18:34:25.016000 audit: BPF prog-id=5 op=UNLOAD Feb 9 18:34:25.016000 audit: BPF prog-id=4 op=UNLOAD Feb 9 18:34:25.016000 audit: BPF prog-id=3 op=UNLOAD Feb 9 18:34:25.002876 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 18:34:25.004139 systemd[1]: Reached target initrd-switch-root.target. Feb 9 18:34:25.006065 systemd[1]: Starting initrd-switch-root.service... Feb 9 18:34:25.012292 systemd[1]: Switching root. Feb 9 18:34:25.030635 iscsid[745]: iscsid shutting down. Feb 9 18:34:25.031124 systemd-journald[289]: Journal stopped Feb 9 18:34:27.319044 systemd-journald[289]: Received SIGTERM from PID 1 (systemd). Feb 9 18:34:27.319098 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 18:34:27.319113 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 18:34:27.319127 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 18:34:27.319137 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 18:34:27.319147 kernel: SELinux: policy capability open_perms=1 Feb 9 18:34:27.319157 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 18:34:27.319167 kernel: SELinux: policy capability always_check_network=0 Feb 9 18:34:27.319176 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 18:34:27.319186 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 18:34:27.319200 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 18:34:27.319210 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 18:34:27.319227 systemd[1]: Successfully loaded SELinux policy in 35.457ms. Feb 9 18:34:27.319245 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.661ms. 
Feb 9 18:34:27.319257 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 18:34:27.319268 systemd[1]: Detected virtualization kvm. Feb 9 18:34:27.319278 systemd[1]: Detected architecture arm64. Feb 9 18:34:27.319289 systemd[1]: Detected first boot. Feb 9 18:34:27.319300 systemd[1]: Initializing machine ID from VM UUID. Feb 9 18:34:27.319311 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 18:34:27.319321 systemd[1]: Populated /etc with preset unit settings. Feb 9 18:34:27.319332 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:34:27.319344 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:34:27.319365 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:34:27.319377 systemd[1]: Queued start job for default target multi-user.target. Feb 9 18:34:27.319394 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 9 18:34:27.319406 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 18:34:27.319416 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 18:34:27.319426 systemd[1]: Created slice system-getty.slice. Feb 9 18:34:27.319436 systemd[1]: Created slice system-modprobe.slice. Feb 9 18:34:27.319447 systemd[1]: Created slice system-serial\x2dgetty.slice. 
Feb 9 18:34:27.319457 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 18:34:27.319468 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 18:34:27.319481 systemd[1]: Created slice user.slice. Feb 9 18:34:27.319491 systemd[1]: Started systemd-ask-password-console.path. Feb 9 18:34:27.319502 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 18:34:27.319512 systemd[1]: Set up automount boot.automount. Feb 9 18:34:27.319523 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 18:34:27.319534 systemd[1]: Reached target integritysetup.target. Feb 9 18:34:27.319544 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 18:34:27.319555 systemd[1]: Reached target remote-fs.target. Feb 9 18:34:27.319567 systemd[1]: Reached target slices.target. Feb 9 18:34:27.319578 systemd[1]: Reached target swap.target. Feb 9 18:34:27.319589 systemd[1]: Reached target torcx.target. Feb 9 18:34:27.319600 systemd[1]: Reached target veritysetup.target. Feb 9 18:34:27.319610 systemd[1]: Listening on systemd-coredump.socket. Feb 9 18:34:27.319621 systemd[1]: Listening on systemd-initctl.socket. Feb 9 18:34:27.319634 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 18:34:27.319644 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 18:34:27.319654 systemd[1]: Listening on systemd-journald.socket. Feb 9 18:34:27.319664 systemd[1]: Listening on systemd-networkd.socket. Feb 9 18:34:27.319677 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 18:34:27.319687 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 18:34:27.319698 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 18:34:27.319709 systemd[1]: Mounting dev-hugepages.mount... Feb 9 18:34:27.319720 systemd[1]: Mounting dev-mqueue.mount... Feb 9 18:34:27.319730 systemd[1]: Mounting media.mount... Feb 9 18:34:27.319741 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 18:34:27.319752 systemd[1]: Mounting sys-kernel-tracing.mount... 
Feb 9 18:34:27.319762 systemd[1]: Mounting tmp.mount... Feb 9 18:34:27.319774 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 18:34:27.319784 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 18:34:27.319795 systemd[1]: Starting kmod-static-nodes.service... Feb 9 18:34:27.319806 systemd[1]: Starting modprobe@configfs.service... Feb 9 18:34:27.319816 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 18:34:27.319827 systemd[1]: Starting modprobe@drm.service... Feb 9 18:34:27.319837 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 18:34:27.319847 systemd[1]: Starting modprobe@fuse.service... Feb 9 18:34:27.319857 systemd[1]: Starting modprobe@loop.service... Feb 9 18:34:27.319869 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 18:34:27.319879 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 9 18:34:27.319890 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 9 18:34:27.319900 systemd[1]: Starting systemd-journald.service... Feb 9 18:34:27.319911 systemd[1]: Starting systemd-modules-load.service... Feb 9 18:34:27.319921 systemd[1]: Starting systemd-network-generator.service... Feb 9 18:34:27.319932 systemd[1]: Starting systemd-remount-fs.service... Feb 9 18:34:27.319943 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 18:34:27.319953 kernel: fuse: init (API version 7.34) Feb 9 18:34:27.319964 systemd[1]: Mounted dev-hugepages.mount. Feb 9 18:34:27.319974 systemd[1]: Mounted dev-mqueue.mount. Feb 9 18:34:27.319985 systemd[1]: Mounted media.mount. Feb 9 18:34:27.319996 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 18:34:27.320007 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 18:34:27.320019 systemd[1]: Mounted tmp.mount. 
Feb 9 18:34:27.320029 systemd[1]: Finished kmod-static-nodes.service. Feb 9 18:34:27.320040 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 18:34:27.320051 systemd[1]: Finished modprobe@configfs.service. Feb 9 18:34:27.320061 kernel: loop: module loaded Feb 9 18:34:27.320074 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 18:34:27.320086 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 18:34:27.320097 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 18:34:27.320107 systemd[1]: Finished modprobe@drm.service. Feb 9 18:34:27.320118 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 18:34:27.320128 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 18:34:27.320138 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 18:34:27.320148 systemd[1]: Finished modprobe@fuse.service. Feb 9 18:34:27.320159 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 18:34:27.320170 systemd[1]: Finished modprobe@loop.service. Feb 9 18:34:27.320182 systemd[1]: Finished systemd-modules-load.service. Feb 9 18:34:27.320192 systemd[1]: Finished systemd-network-generator.service. Feb 9 18:34:27.320204 systemd-journald[1023]: Journal started Feb 9 18:34:27.320244 systemd-journald[1023]: Runtime Journal (/run/log/journal/1dae3ad951e242749fbb8e36af04af32) is 6.0M, max 48.7M, 42.6M free. 
Feb 9 18:34:27.191000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 18:34:27.191000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 18:34:27.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:34:27.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:34:27.313000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 18:34:27.313000 audit[1023]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffc2e928e0 a2=4000 a3=1 items=0 ppid=1 pid=1023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:27.313000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 18:34:27.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.321676 systemd[1]: Finished systemd-remount-fs.service. Feb 9 18:34:27.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.323646 systemd[1]: Started systemd-journald.service. Feb 9 18:34:27.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.324348 systemd[1]: Reached target network-pre.target. Feb 9 18:34:27.326323 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 18:34:27.328419 systemd[1]: Mounting sys-kernel-config.mount... 
Feb 9 18:34:27.329129 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 18:34:27.333841 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 18:34:27.335615 systemd[1]: Starting systemd-journal-flush.service... Feb 9 18:34:27.336325 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 18:34:27.337481 systemd[1]: Starting systemd-random-seed.service... Feb 9 18:34:27.338288 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 18:34:27.339537 systemd[1]: Starting systemd-sysctl.service... Feb 9 18:34:27.343426 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 18:34:27.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.344994 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 18:34:27.346620 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 18:34:27.351020 systemd[1]: Starting systemd-udev-settle.service... Feb 9 18:34:27.352882 systemd-journald[1023]: Time spent on flushing to /var/log/journal/1dae3ad951e242749fbb8e36af04af32 is 13.346ms for 961 entries. Feb 9 18:34:27.352882 systemd-journald[1023]: System Journal (/var/log/journal/1dae3ad951e242749fbb8e36af04af32) is 8.0M, max 195.6M, 187.6M free. Feb 9 18:34:27.378018 systemd-journald[1023]: Received client request to flush runtime journal. Feb 9 18:34:27.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:34:27.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.379041 udevadm[1077]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 18:34:27.356039 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 18:34:27.359234 systemd[1]: Finished systemd-random-seed.service. Feb 9 18:34:27.360531 systemd[1]: Reached target first-boot-complete.target. Feb 9 18:34:27.362541 systemd[1]: Starting systemd-sysusers.service... Feb 9 18:34:27.365981 systemd[1]: Finished systemd-sysctl.service. Feb 9 18:34:27.379115 systemd[1]: Finished systemd-journal-flush.service. Feb 9 18:34:27.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.386646 systemd[1]: Finished systemd-sysusers.service. Feb 9 18:34:27.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.388616 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 18:34:27.404287 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Feb 9 18:34:27.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.700195 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 18:34:27.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.702374 systemd[1]: Starting systemd-udevd.service... Feb 9 18:34:27.722443 systemd-udevd[1092]: Using default interface naming scheme 'v252'. Feb 9 18:34:27.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.735012 systemd[1]: Started systemd-udevd.service. Feb 9 18:34:27.737259 systemd[1]: Starting systemd-networkd.service... Feb 9 18:34:27.744653 systemd[1]: Starting systemd-userdbd.service... Feb 9 18:34:27.761690 systemd[1]: Found device dev-ttyAMA0.device. Feb 9 18:34:27.782099 systemd[1]: Started systemd-userdbd.service. Feb 9 18:34:27.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.797896 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Feb 9 18:34:27.854512 systemd-networkd[1099]: lo: Link UP Feb 9 18:34:27.854522 systemd-networkd[1099]: lo: Gained carrier Feb 9 18:34:27.854852 systemd-networkd[1099]: Enumeration completed Feb 9 18:34:27.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.854947 systemd-networkd[1099]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 18:34:27.854955 systemd[1]: Started systemd-networkd.service. Feb 9 18:34:27.857781 systemd-networkd[1099]: eth0: Link UP Feb 9 18:34:27.857792 systemd-networkd[1099]: eth0: Gained carrier Feb 9 18:34:27.865764 systemd[1]: Finished systemd-udev-settle.service. Feb 9 18:34:27.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.867863 systemd[1]: Starting lvm2-activation-early.service... Feb 9 18:34:27.872484 systemd-networkd[1099]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 18:34:27.879965 lvm[1126]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 18:34:27.912153 systemd[1]: Finished lvm2-activation-early.service. Feb 9 18:34:27.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.912959 systemd[1]: Reached target cryptsetup.target. Feb 9 18:34:27.914780 systemd[1]: Starting lvm2-activation.service... Feb 9 18:34:27.918448 lvm[1128]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 18:34:27.942378 systemd[1]: Finished lvm2-activation.service. 
Feb 9 18:34:27.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.943123 systemd[1]: Reached target local-fs-pre.target. Feb 9 18:34:27.943779 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 18:34:27.943808 systemd[1]: Reached target local-fs.target. Feb 9 18:34:27.944349 systemd[1]: Reached target machines.target. Feb 9 18:34:27.946185 systemd[1]: Starting ldconfig.service... Feb 9 18:34:27.947191 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 18:34:27.947245 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:34:27.948335 systemd[1]: Starting systemd-boot-update.service... Feb 9 18:34:27.950147 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 18:34:27.952296 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 18:34:27.953362 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 18:34:27.953448 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 18:34:27.954596 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 18:34:27.957420 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1131 (bootctl) Feb 9 18:34:27.958511 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 18:34:27.966434 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 18:34:27.966862 systemd-tmpfiles[1134]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. 
Feb 9 18:34:27.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:27.972646 systemd-tmpfiles[1134]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 18:34:27.973970 systemd-tmpfiles[1134]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 18:34:28.112036 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 18:34:28.112772 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 18:34:28.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:28.117161 systemd-fsck[1140]: fsck.fat 4.2 (2021-01-31) Feb 9 18:34:28.117161 systemd-fsck[1140]: /dev/vda1: 236 files, 113719/258078 clusters Feb 9 18:34:28.119168 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 18:34:28.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:28.122169 systemd[1]: Mounting boot.mount... Feb 9 18:34:28.129007 systemd[1]: Mounted boot.mount. Feb 9 18:34:28.137439 systemd[1]: Finished systemd-boot-update.service. Feb 9 18:34:28.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:28.197792 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Feb 9 18:34:28.197935 ldconfig[1130]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 18:34:28.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:28.202730 systemd[1]: Starting audit-rules.service... Feb 9 18:34:28.204988 systemd[1]: Starting clean-ca-certificates.service... Feb 9 18:34:28.206948 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 18:34:28.209404 systemd[1]: Starting systemd-resolved.service... Feb 9 18:34:28.211832 systemd[1]: Starting systemd-timesyncd.service... Feb 9 18:34:28.214608 systemd[1]: Starting systemd-update-utmp.service... Feb 9 18:34:28.216074 systemd[1]: Finished ldconfig.service. Feb 9 18:34:28.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:28.217194 systemd[1]: Finished clean-ca-certificates.service. Feb 9 18:34:28.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:28.218744 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 18:34:28.220000 audit[1161]: SYSTEM_BOOT pid=1161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 18:34:28.223232 systemd[1]: Finished systemd-update-utmp.service. 
Feb 9 18:34:28.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:28.224516 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 18:34:28.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:28.226847 systemd[1]: Starting systemd-update-done.service... Feb 9 18:34:28.235506 systemd[1]: Finished systemd-update-done.service. Feb 9 18:34:28.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:28.246000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 18:34:28.246000 audit[1174]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdb4d5930 a2=420 a3=0 items=0 ppid=1148 pid=1174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:28.246000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 18:34:28.246716 augenrules[1174]: No rules Feb 9 18:34:28.247371 systemd[1]: Finished audit-rules.service. Feb 9 18:34:28.274748 systemd-resolved[1156]: Positive Trust Anchors: Feb 9 18:34:28.274759 systemd-resolved[1156]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 18:34:28.274787 systemd-resolved[1156]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 18:34:28.285207 systemd[1]: Started systemd-timesyncd.service. Feb 9 18:34:28.285933 systemd-timesyncd[1160]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 9 18:34:28.286281 systemd-timesyncd[1160]: Initial clock synchronization to Fri 2024-02-09 18:34:28.036657 UTC. Feb 9 18:34:28.286475 systemd[1]: Reached target time-set.target. Feb 9 18:34:28.290656 systemd-resolved[1156]: Defaulting to hostname 'linux'. Feb 9 18:34:28.291968 systemd[1]: Started systemd-resolved.service. Feb 9 18:34:28.292828 systemd[1]: Reached target network.target. Feb 9 18:34:28.293437 systemd[1]: Reached target nss-lookup.target. Feb 9 18:34:28.294002 systemd[1]: Reached target sysinit.target. Feb 9 18:34:28.294651 systemd[1]: Started motdgen.path. Feb 9 18:34:28.295165 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 18:34:28.296136 systemd[1]: Started logrotate.timer. Feb 9 18:34:28.296939 systemd[1]: Started mdadm.timer. Feb 9 18:34:28.297638 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 18:34:28.298473 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 18:34:28.298502 systemd[1]: Reached target paths.target. Feb 9 18:34:28.299219 systemd[1]: Reached target timers.target. Feb 9 18:34:28.300279 systemd[1]: Listening on dbus.socket. 
Feb 9 18:34:28.302099 systemd[1]: Starting docker.socket... Feb 9 18:34:28.303721 systemd[1]: Listening on sshd.socket. Feb 9 18:34:28.304576 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:34:28.304894 systemd[1]: Listening on docker.socket. Feb 9 18:34:28.305666 systemd[1]: Reached target sockets.target. Feb 9 18:34:28.306431 systemd[1]: Reached target basic.target. Feb 9 18:34:28.307313 systemd[1]: System is tainted: cgroupsv1 Feb 9 18:34:28.307402 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 18:34:28.307427 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 18:34:28.308561 systemd[1]: Starting containerd.service... Feb 9 18:34:28.310487 systemd[1]: Starting dbus.service... Feb 9 18:34:28.312256 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 18:34:28.314433 systemd[1]: Starting extend-filesystems.service... Feb 9 18:34:28.315244 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 18:34:28.316752 systemd[1]: Starting motdgen.service... Feb 9 18:34:28.318770 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 18:34:28.321742 systemd[1]: Starting prepare-critools.service... Feb 9 18:34:28.323577 systemd[1]: Starting prepare-helm.service... Feb 9 18:34:28.325481 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 18:34:28.327540 systemd[1]: Starting sshd-keygen.service... Feb 9 18:34:28.330420 systemd[1]: Starting systemd-logind.service... Feb 9 18:34:28.331279 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Feb 9 18:34:28.331466 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 18:34:28.332614 systemd[1]: Starting update-engine.service... Feb 9 18:34:28.335914 jq[1186]: false Feb 9 18:34:28.335144 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 18:34:28.342522 jq[1206]: true Feb 9 18:34:28.338433 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 18:34:28.338736 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 18:34:28.342061 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 18:34:28.342319 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 18:34:28.356027 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 18:34:28.356263 systemd[1]: Finished motdgen.service. Feb 9 18:34:28.359704 jq[1220]: true Feb 9 18:34:28.364573 tar[1216]: linux-arm64/helm Feb 9 18:34:28.364876 tar[1215]: crictl Feb 9 18:34:28.366682 tar[1212]: ./ Feb 9 18:34:28.366682 tar[1212]: ./macvlan Feb 9 18:34:28.384985 extend-filesystems[1187]: Found vda Feb 9 18:34:28.385924 extend-filesystems[1187]: Found vda1 Feb 9 18:34:28.385924 extend-filesystems[1187]: Found vda2 Feb 9 18:34:28.385924 extend-filesystems[1187]: Found vda3 Feb 9 18:34:28.385924 extend-filesystems[1187]: Found usr Feb 9 18:34:28.385924 extend-filesystems[1187]: Found vda4 Feb 9 18:34:28.385924 extend-filesystems[1187]: Found vda6 Feb 9 18:34:28.385924 extend-filesystems[1187]: Found vda7 Feb 9 18:34:28.385924 extend-filesystems[1187]: Found vda9 Feb 9 18:34:28.385924 extend-filesystems[1187]: Checking size of /dev/vda9 Feb 9 18:34:28.405854 dbus-daemon[1185]: [system] SELinux support is enabled Feb 9 18:34:28.406063 systemd[1]: Started dbus.service. Feb 9 18:34:28.408792 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Feb 9 18:34:28.408820 systemd[1]: Reached target system-config.target. Feb 9 18:34:28.409497 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 18:34:28.409515 systemd[1]: Reached target user-config.target. Feb 9 18:34:28.423766 extend-filesystems[1187]: Resized partition /dev/vda9 Feb 9 18:34:28.453909 extend-filesystems[1252]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 18:34:28.463683 tar[1212]: ./static Feb 9 18:34:28.473377 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 9 18:34:28.475192 systemd-logind[1201]: Watching system buttons on /dev/input/event0 (Power Button) Feb 9 18:34:28.476308 bash[1241]: Updated "/home/core/.ssh/authorized_keys" Feb 9 18:34:28.476707 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 18:34:28.479445 systemd-logind[1201]: New seat seat0. Feb 9 18:34:28.497044 systemd[1]: Started systemd-logind.service. Feb 9 18:34:28.501382 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 9 18:34:28.521499 extend-filesystems[1252]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 9 18:34:28.521499 extend-filesystems[1252]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 18:34:28.521499 extend-filesystems[1252]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 9 18:34:28.523986 tar[1212]: ./vlan Feb 9 18:34:28.524492 extend-filesystems[1187]: Resized filesystem in /dev/vda9 Feb 9 18:34:28.530713 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 18:34:28.530957 systemd[1]: Finished extend-filesystems.service. Feb 9 18:34:28.548984 update_engine[1203]: I0209 18:34:28.548779 1203 main.cc:92] Flatcar Update Engine starting Feb 9 18:34:28.554344 tar[1212]: ./portmap Feb 9 18:34:28.568409 systemd[1]: Started update-engine.service. 
Feb 9 18:34:28.571824 update_engine[1203]: I0209 18:34:28.568444 1203 update_check_scheduler.cc:74] Next update check in 11m48s Feb 9 18:34:28.570763 systemd[1]: Started locksmithd.service. Feb 9 18:34:28.587730 tar[1212]: ./host-local Feb 9 18:34:28.590383 env[1223]: time="2024-02-09T18:34:28.589940920Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 18:34:28.614058 tar[1212]: ./vrf Feb 9 18:34:28.634293 env[1223]: time="2024-02-09T18:34:28.634250400Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 18:34:28.634445 env[1223]: time="2024-02-09T18:34:28.634424840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:34:28.638420 env[1223]: time="2024-02-09T18:34:28.638374360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:34:28.638420 env[1223]: time="2024-02-09T18:34:28.638416440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:34:28.638701 env[1223]: time="2024-02-09T18:34:28.638674720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:34:28.638701 env[1223]: time="2024-02-09T18:34:28.638699400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 9 18:34:28.638752 env[1223]: time="2024-02-09T18:34:28.638712640Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 18:34:28.638752 env[1223]: time="2024-02-09T18:34:28.638722320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 18:34:28.638810 env[1223]: time="2024-02-09T18:34:28.638793440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:34:28.639083 env[1223]: time="2024-02-09T18:34:28.639062920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:34:28.639242 env[1223]: time="2024-02-09T18:34:28.639221240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:34:28.639275 env[1223]: time="2024-02-09T18:34:28.639241600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 18:34:28.639310 env[1223]: time="2024-02-09T18:34:28.639293560Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 18:34:28.639344 env[1223]: time="2024-02-09T18:34:28.639310040Z" level=info msg="metadata content store policy set" policy=shared Feb 9 18:34:28.643556 tar[1212]: ./bridge Feb 9 18:34:28.651985 env[1223]: time="2024-02-09T18:34:28.651951400Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 18:34:28.652049 env[1223]: time="2024-02-09T18:34:28.651999960Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Feb 9 18:34:28.652049 env[1223]: time="2024-02-09T18:34:28.652014360Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 18:34:28.652049 env[1223]: time="2024-02-09T18:34:28.652046480Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 18:34:28.652116 env[1223]: time="2024-02-09T18:34:28.652069560Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 18:34:28.652116 env[1223]: time="2024-02-09T18:34:28.652084320Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 18:34:28.652116 env[1223]: time="2024-02-09T18:34:28.652099520Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 18:34:28.652484 env[1223]: time="2024-02-09T18:34:28.652465720Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 18:34:28.652524 env[1223]: time="2024-02-09T18:34:28.652490320Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 18:34:28.652524 env[1223]: time="2024-02-09T18:34:28.652504080Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 18:34:28.652565 env[1223]: time="2024-02-09T18:34:28.652524840Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 18:34:28.652565 env[1223]: time="2024-02-09T18:34:28.652537520Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 18:34:28.652697 env[1223]: time="2024-02-09T18:34:28.652676160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Feb 9 18:34:28.652788 env[1223]: time="2024-02-09T18:34:28.652772760Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 18:34:28.653111 env[1223]: time="2024-02-09T18:34:28.653091120Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 18:34:28.653151 env[1223]: time="2024-02-09T18:34:28.653129840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 18:34:28.653151 env[1223]: time="2024-02-09T18:34:28.653145960Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 18:34:28.653268 env[1223]: time="2024-02-09T18:34:28.653252400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 18:34:28.653295 env[1223]: time="2024-02-09T18:34:28.653269240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 18:34:28.653295 env[1223]: time="2024-02-09T18:34:28.653282120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 18:34:28.653295 env[1223]: time="2024-02-09T18:34:28.653293040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 18:34:28.653362 env[1223]: time="2024-02-09T18:34:28.653304760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 18:34:28.653362 env[1223]: time="2024-02-09T18:34:28.653326320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 18:34:28.653362 env[1223]: time="2024-02-09T18:34:28.653337920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Feb 9 18:34:28.653429 env[1223]: time="2024-02-09T18:34:28.653369080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 18:34:28.653429 env[1223]: time="2024-02-09T18:34:28.653383600Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 18:34:28.653563 env[1223]: time="2024-02-09T18:34:28.653542920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 18:34:28.653592 env[1223]: time="2024-02-09T18:34:28.653566960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 18:34:28.653592 env[1223]: time="2024-02-09T18:34:28.653580680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 18:34:28.653629 env[1223]: time="2024-02-09T18:34:28.653600440Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 18:34:28.653629 env[1223]: time="2024-02-09T18:34:28.653616240Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 18:34:28.653629 env[1223]: time="2024-02-09T18:34:28.653627240Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 18:34:28.653724 env[1223]: time="2024-02-09T18:34:28.653643480Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 18:34:28.653724 env[1223]: time="2024-02-09T18:34:28.653683440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 18:34:28.653956 env[1223]: time="2024-02-09T18:34:28.653904360Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 18:34:28.657369 env[1223]: time="2024-02-09T18:34:28.653962400Z" level=info msg="Connect containerd service" Feb 9 18:34:28.657369 env[1223]: time="2024-02-09T18:34:28.654002640Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 18:34:28.657369 env[1223]: time="2024-02-09T18:34:28.654658960Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 18:34:28.657369 env[1223]: time="2024-02-09T18:34:28.655061360Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 18:34:28.657369 env[1223]: time="2024-02-09T18:34:28.655112640Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 18:34:28.657369 env[1223]: time="2024-02-09T18:34:28.655162880Z" level=info msg="containerd successfully booted in 0.067135s" Feb 9 18:34:28.657369 env[1223]: time="2024-02-09T18:34:28.656849560Z" level=info msg="Start subscribing containerd event" Feb 9 18:34:28.657369 env[1223]: time="2024-02-09T18:34:28.656899240Z" level=info msg="Start recovering state" Feb 9 18:34:28.657369 env[1223]: time="2024-02-09T18:34:28.656959200Z" level=info msg="Start event monitor" Feb 9 18:34:28.657369 env[1223]: time="2024-02-09T18:34:28.656976000Z" level=info msg="Start snapshots syncer" Feb 9 18:34:28.657369 env[1223]: time="2024-02-09T18:34:28.656987680Z" level=info msg="Start cni network conf syncer for default" Feb 9 18:34:28.657369 env[1223]: time="2024-02-09T18:34:28.656996280Z" level=info msg="Start streaming server" Feb 9 18:34:28.656491 systemd[1]: Started containerd.service. 
Feb 9 18:34:28.678767 tar[1212]: ./tuning Feb 9 18:34:28.706426 tar[1212]: ./firewall Feb 9 18:34:28.741116 tar[1212]: ./host-device Feb 9 18:34:28.771971 tar[1212]: ./sbr Feb 9 18:34:28.799789 tar[1212]: ./loopback Feb 9 18:34:28.818028 systemd[1]: Finished prepare-critools.service. Feb 9 18:34:28.827525 tar[1212]: ./dhcp Feb 9 18:34:28.903004 tar[1212]: ./ptp Feb 9 18:34:28.909696 tar[1216]: linux-arm64/LICENSE Feb 9 18:34:28.909772 tar[1216]: linux-arm64/README.md Feb 9 18:34:28.914840 locksmithd[1258]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 18:34:28.915611 systemd[1]: Finished prepare-helm.service. Feb 9 18:34:28.936206 tar[1212]: ./ipvlan Feb 9 18:34:28.967986 tar[1212]: ./bandwidth Feb 9 18:34:29.009765 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 18:34:29.163505 systemd-networkd[1099]: eth0: Gained IPv6LL Feb 9 18:34:29.647005 sshd_keygen[1218]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 18:34:29.663451 systemd[1]: Finished sshd-keygen.service. Feb 9 18:34:29.665697 systemd[1]: Starting issuegen.service... Feb 9 18:34:29.669772 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 18:34:29.669966 systemd[1]: Finished issuegen.service. Feb 9 18:34:29.672093 systemd[1]: Starting systemd-user-sessions.service... Feb 9 18:34:29.677343 systemd[1]: Finished systemd-user-sessions.service. Feb 9 18:34:29.679398 systemd[1]: Started getty@tty1.service. Feb 9 18:34:29.681024 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 9 18:34:29.682028 systemd[1]: Reached target getty.target. Feb 9 18:34:29.682687 systemd[1]: Reached target multi-user.target. Feb 9 18:34:29.684376 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 18:34:29.689854 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 18:34:29.690037 systemd[1]: Finished systemd-update-utmp-runlevel.service. 
Feb 9 18:34:29.691062 systemd[1]: Startup finished in 11.057s (kernel) + 4.605s (userspace) = 15.662s. Feb 9 18:34:37.867965 systemd[1]: Created slice system-sshd.slice. Feb 9 18:34:37.869457 systemd[1]: Started sshd@0-10.0.0.89:22-10.0.0.1:58962.service. Feb 9 18:34:37.917718 sshd[1297]: Accepted publickey for core from 10.0.0.1 port 58962 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:34:37.920022 sshd[1297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:34:37.930549 systemd-logind[1201]: New session 1 of user core. Feb 9 18:34:37.931415 systemd[1]: Created slice user-500.slice. Feb 9 18:34:37.932391 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 18:34:37.940229 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 18:34:37.941413 systemd[1]: Starting user@500.service... Feb 9 18:34:37.944065 (systemd)[1302]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:34:37.999367 systemd[1302]: Queued start job for default target default.target. Feb 9 18:34:37.999555 systemd[1302]: Reached target paths.target. Feb 9 18:34:37.999569 systemd[1302]: Reached target sockets.target. Feb 9 18:34:37.999580 systemd[1302]: Reached target timers.target. Feb 9 18:34:37.999610 systemd[1302]: Reached target basic.target. Feb 9 18:34:37.999650 systemd[1302]: Reached target default.target. Feb 9 18:34:37.999673 systemd[1302]: Startup finished in 50ms. Feb 9 18:34:37.999878 systemd[1]: Started user@500.service. Feb 9 18:34:38.000805 systemd[1]: Started session-1.scope. Feb 9 18:34:38.048869 systemd[1]: Started sshd@1-10.0.0.89:22-10.0.0.1:58972.service. Feb 9 18:34:38.090437 sshd[1311]: Accepted publickey for core from 10.0.0.1 port 58972 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:34:38.091542 sshd[1311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:34:38.095626 systemd[1]: Started session-2.scope. 
Feb 9 18:34:38.095809 systemd-logind[1201]: New session 2 of user core. Feb 9 18:34:38.148540 sshd[1311]: pam_unix(sshd:session): session closed for user core Feb 9 18:34:38.150695 systemd[1]: Started sshd@2-10.0.0.89:22-10.0.0.1:58984.service. Feb 9 18:34:38.151163 systemd[1]: sshd@1-10.0.0.89:22-10.0.0.1:58972.service: Deactivated successfully. Feb 9 18:34:38.152612 systemd-logind[1201]: Session 2 logged out. Waiting for processes to exit. Feb 9 18:34:38.152662 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 18:34:38.153986 systemd-logind[1201]: Removed session 2. Feb 9 18:34:38.190698 sshd[1316]: Accepted publickey for core from 10.0.0.1 port 58984 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:34:38.191719 sshd[1316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:34:38.194651 systemd-logind[1201]: New session 3 of user core. Feb 9 18:34:38.195421 systemd[1]: Started session-3.scope. Feb 9 18:34:38.244088 sshd[1316]: pam_unix(sshd:session): session closed for user core Feb 9 18:34:38.246128 systemd[1]: Started sshd@3-10.0.0.89:22-10.0.0.1:58994.service. Feb 9 18:34:38.246635 systemd[1]: sshd@2-10.0.0.89:22-10.0.0.1:58984.service: Deactivated successfully. Feb 9 18:34:38.248623 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 18:34:38.249018 systemd-logind[1201]: Session 3 logged out. Waiting for processes to exit. Feb 9 18:34:38.249929 systemd-logind[1201]: Removed session 3. Feb 9 18:34:38.287878 sshd[1323]: Accepted publickey for core from 10.0.0.1 port 58994 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:34:38.288912 sshd[1323]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:34:38.292560 systemd[1]: Started session-4.scope. Feb 9 18:34:38.293182 systemd-logind[1201]: New session 4 of user core. 
Feb 9 18:34:38.344703 sshd[1323]: pam_unix(sshd:session): session closed for user core Feb 9 18:34:38.346567 systemd[1]: Started sshd@4-10.0.0.89:22-10.0.0.1:58996.service. Feb 9 18:34:38.347522 systemd[1]: sshd@3-10.0.0.89:22-10.0.0.1:58994.service: Deactivated successfully. Feb 9 18:34:38.348481 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 18:34:38.348840 systemd-logind[1201]: Session 4 logged out. Waiting for processes to exit. Feb 9 18:34:38.349517 systemd-logind[1201]: Removed session 4. Feb 9 18:34:38.387200 sshd[1330]: Accepted publickey for core from 10.0.0.1 port 58996 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:34:38.388237 sshd[1330]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:34:38.392271 systemd-logind[1201]: New session 5 of user core. Feb 9 18:34:38.392843 systemd[1]: Started session-5.scope. Feb 9 18:34:38.448244 sudo[1336]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 9 18:34:38.448469 sudo[1336]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 18:34:38.461584 dbus-daemon[1185]: avc: received setenforce notice (enforcing=1) Feb 9 18:34:38.463227 sudo[1336]: pam_unix(sudo:session): session closed for user root Feb 9 18:34:38.464948 sshd[1330]: pam_unix(sshd:session): session closed for user core Feb 9 18:34:38.467213 systemd[1]: Started sshd@5-10.0.0.89:22-10.0.0.1:59006.service. Feb 9 18:34:38.467837 systemd[1]: sshd@4-10.0.0.89:22-10.0.0.1:58996.service: Deactivated successfully. Feb 9 18:34:38.468851 systemd-logind[1201]: Session 5 logged out. Waiting for processes to exit. Feb 9 18:34:38.468913 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 18:34:38.469869 systemd-logind[1201]: Removed session 5. 
Feb 9 18:34:38.507371 sshd[1338]: Accepted publickey for core from 10.0.0.1 port 59006 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:34:38.508466 sshd[1338]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:34:38.511779 systemd-logind[1201]: New session 6 of user core. Feb 9 18:34:38.512182 systemd[1]: Started session-6.scope. Feb 9 18:34:38.563313 sudo[1345]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 9 18:34:38.563785 sudo[1345]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 18:34:38.566370 sudo[1345]: pam_unix(sudo:session): session closed for user root Feb 9 18:34:38.570210 sudo[1344]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 9 18:34:38.570419 sudo[1344]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 18:34:38.577763 systemd[1]: Stopping audit-rules.service... Feb 9 18:34:38.578000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 9 18:34:38.578922 auditctl[1348]: No rules Feb 9 18:34:38.579136 systemd[1]: audit-rules.service: Deactivated successfully. Feb 9 18:34:38.579328 systemd[1]: Stopped audit-rules.service. 
Feb 9 18:34:38.580861 kernel: kauditd_printk_skb: 95 callbacks suppressed Feb 9 18:34:38.580924 kernel: audit: type=1305 audit(1707503678.578:128): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 9 18:34:38.580946 kernel: audit: type=1300 audit(1707503678.578:128): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffded94730 a2=420 a3=0 items=0 ppid=1 pid=1348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:38.578000 audit[1348]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffded94730 a2=420 a3=0 items=0 ppid=1 pid=1348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:38.580701 systemd[1]: Starting audit-rules.service... Feb 9 18:34:38.578000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Feb 9 18:34:38.583677 kernel: audit: type=1327 audit(1707503678.578:128): proctitle=2F7362696E2F617564697463746C002D44 Feb 9 18:34:38.583715 kernel: audit: type=1131 audit(1707503678.578:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:38.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:38.594351 augenrules[1366]: No rules Feb 9 18:34:38.594999 systemd[1]: Finished audit-rules.service. 
Feb 9 18:34:38.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:38.597571 sudo[1344]: pam_unix(sudo:session): session closed for user root Feb 9 18:34:38.597000 audit[1344]: USER_END pid=1344 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 18:34:38.600383 kernel: audit: type=1130 audit(1707503678.594:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:38.600501 kernel: audit: type=1106 audit(1707503678.597:131): pid=1344 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 18:34:38.600530 kernel: audit: type=1104 audit(1707503678.597:132): pid=1344 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 18:34:38.597000 audit[1344]: CRED_DISP pid=1344 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Feb 9 18:34:38.602780 sshd[1338]: pam_unix(sshd:session): session closed for user core Feb 9 18:34:38.603000 audit[1338]: USER_END pid=1338 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:34:38.605232 systemd[1]: Started sshd@6-10.0.0.89:22-10.0.0.1:59020.service. Feb 9 18:34:38.605675 systemd[1]: sshd@5-10.0.0.89:22-10.0.0.1:59006.service: Deactivated successfully. Feb 9 18:34:38.603000 audit[1338]: CRED_DISP pid=1338 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:34:38.606720 systemd-logind[1201]: Session 6 logged out. Waiting for processes to exit. Feb 9 18:34:38.606763 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 18:34:38.607776 systemd-logind[1201]: Removed session 6. 
Feb 9 18:34:38.608115 kernel: audit: type=1106 audit(1707503678.603:133): pid=1338 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:34:38.608163 kernel: audit: type=1104 audit(1707503678.603:134): pid=1338 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:34:38.608179 kernel: audit: type=1130 audit(1707503678.603:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.89:22-10.0.0.1:59020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:38.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.89:22-10.0.0.1:59020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:38.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.89:22-10.0.0.1:59006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:34:38.645000 audit[1372]: USER_ACCT pid=1372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:34:38.646856 sshd[1372]: Accepted publickey for core from 10.0.0.1 port 59020 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:34:38.646000 audit[1372]: CRED_ACQ pid=1372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:34:38.646000 audit[1372]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffc017f50 a2=3 a3=1 items=0 ppid=1 pid=1372 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:38.646000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 18:34:38.648135 sshd[1372]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:34:38.650963 systemd-logind[1201]: New session 7 of user core. Feb 9 18:34:38.651712 systemd[1]: Started session-7.scope. 
Feb 9 18:34:38.653000 audit[1372]: USER_START pid=1372 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:34:38.654000 audit[1376]: CRED_ACQ pid=1376 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:34:38.700000 audit[1377]: USER_ACCT pid=1377 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 18:34:38.701000 audit[1377]: CRED_REFR pid=1377 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 18:34:38.701368 sudo[1377]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 18:34:38.701572 sudo[1377]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 18:34:38.703000 audit[1377]: USER_START pid=1377 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 18:34:39.254118 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 18:34:39.261410 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 18:34:39.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:34:39.262130 systemd[1]: Reached target network-online.target. Feb 9 18:34:39.263650 systemd[1]: Starting docker.service... Feb 9 18:34:39.344185 env[1396]: time="2024-02-09T18:34:39.344134478Z" level=info msg="Starting up" Feb 9 18:34:39.345723 env[1396]: time="2024-02-09T18:34:39.345696412Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 18:34:39.345818 env[1396]: time="2024-02-09T18:34:39.345803209Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 18:34:39.345883 env[1396]: time="2024-02-09T18:34:39.345868153Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc Feb 9 18:34:39.345931 env[1396]: time="2024-02-09T18:34:39.345919607Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 18:34:39.347826 env[1396]: time="2024-02-09T18:34:39.347799673Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 18:34:39.347826 env[1396]: time="2024-02-09T18:34:39.347819112Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 18:34:39.347902 env[1396]: time="2024-02-09T18:34:39.347834783Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc Feb 9 18:34:39.347902 env[1396]: time="2024-02-09T18:34:39.347843233Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 18:34:39.550574 env[1396]: time="2024-02-09T18:34:39.550485351Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 9 18:34:39.550574 env[1396]: time="2024-02-09T18:34:39.550513399Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 9 18:34:39.550759 env[1396]: time="2024-02-09T18:34:39.550748298Z" level=info msg="Loading containers: start." 
Feb 9 18:34:39.587000 audit[1430]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1430 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:39.587000 audit[1430]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffe9cb1e80 a2=0 a3=1 items=0 ppid=1396 pid=1430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:39.587000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Feb 9 18:34:39.588000 audit[1432]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1432 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:39.588000 audit[1432]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd0157d90 a2=0 a3=1 items=0 ppid=1396 pid=1432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:39.588000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Feb 9 18:34:39.590000 audit[1434]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1434 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:39.590000 audit[1434]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc467ff50 a2=0 a3=1 items=0 ppid=1396 pid=1434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:39.590000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Feb 9 18:34:39.591000 
audit[1436]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1436 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:39.591000 audit[1436]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff3ec4d70 a2=0 a3=1 items=0 ppid=1396 pid=1436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:39.591000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Feb 9 18:34:39.594000 audit[1438]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1438 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:39.594000 audit[1438]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc04d2630 a2=0 a3=1 items=0 ppid=1396 pid=1438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:39.594000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Feb 9 18:34:39.617000 audit[1443]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1443 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:39.617000 audit[1443]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe989a5b0 a2=0 a3=1 items=0 ppid=1396 pid=1443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:39.617000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Feb 9 18:34:39.625000 audit[1445]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1445 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:39.625000 audit[1445]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff359b2c0 a2=0 a3=1 items=0 ppid=1396 pid=1445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:39.625000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Feb 9 18:34:39.626000 audit[1447]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1447 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:39.626000 audit[1447]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffffd8ee540 a2=0 a3=1 items=0 ppid=1396 pid=1447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:39.626000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Feb 9 18:34:39.628000 audit[1449]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1449 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:39.628000 audit[1449]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffdf77ccc0 a2=0 a3=1 items=0 ppid=1396 pid=1449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:39.628000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 9 18:34:39.633000 audit[1453]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1453 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:39.633000 audit[1453]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffca70ad20 a2=0 a3=1 items=0 ppid=1396 pid=1453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:39.633000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Feb 9 18:34:39.634000 audit[1454]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1454 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:39.634000 audit[1454]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffce752e80 a2=0 a3=1 items=0 ppid=1396 pid=1454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:39.634000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 9 18:34:39.640368 kernel: Initializing XFRM netlink socket Feb 9 18:34:39.662659 env[1396]: time="2024-02-09T18:34:39.662619166Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Feb 9 18:34:39.674000 audit[1462]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1462 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:39.674000 audit[1462]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=ffffcef25a40 a2=0 a3=1 items=0 ppid=1396 pid=1462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:39.674000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Feb 9 18:34:39.688000 audit[1465]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1465 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:39.688000 audit[1465]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffd5623290 a2=0 a3=1 items=0 ppid=1396 pid=1465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:39.688000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Feb 9 18:34:39.691000 audit[1468]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1468 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:39.691000 audit[1468]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffddf0e000 a2=0 a3=1 items=0 ppid=1396 pid=1468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 
18:34:39.691000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Feb 9 18:34:39.692000 audit[1470]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1470 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:39.692000 audit[1470]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffd6c6d790 a2=0 a3=1 items=0 ppid=1396 pid=1470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:39.692000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Feb 9 18:34:39.694000 audit[1472]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1472 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:39.694000 audit[1472]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=ffffe26fec20 a2=0 a3=1 items=0 ppid=1396 pid=1472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:39.694000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Feb 9 18:34:39.696000 audit[1474]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1474 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:39.696000 audit[1474]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffe4848cf0 a2=0 a3=1 items=0 ppid=1396 pid=1474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:39.696000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Feb 9 18:34:39.697000 audit[1476]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1476 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:39.697000 audit[1476]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=fffff5377960 a2=0 a3=1 items=0 ppid=1396 pid=1476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:39.697000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Feb 9 18:34:39.703000 audit[1479]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1479 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:39.703000 audit[1479]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=fffff932dee0 a2=0 a3=1 items=0 ppid=1396 pid=1479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:39.703000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Feb 9 18:34:39.705000 audit[1481]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1481 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:39.705000 audit[1481]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=fffffb81df10 a2=0 a3=1 items=0 ppid=1396 pid=1481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:39.705000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Feb 9 18:34:39.707000 audit[1483]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:39.707000 audit[1483]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffecd57580 a2=0 a3=1 items=0 ppid=1396 pid=1483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:39.707000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Feb 9 18:34:39.708000 audit[1485]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1485 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:39.708000 audit[1485]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffeee76510 a2=0 a3=1 items=0 ppid=1396 pid=1485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:39.708000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Feb 9 
18:34:39.710422 systemd-networkd[1099]: docker0: Link UP Feb 9 18:34:39.716000 audit[1489]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1489 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:39.716000 audit[1489]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd90a0f20 a2=0 a3=1 items=0 ppid=1396 pid=1489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:39.716000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Feb 9 18:34:39.717000 audit[1490]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1490 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:39.717000 audit[1490]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffe9ce7330 a2=0 a3=1 items=0 ppid=1396 pid=1490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:39.717000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 9 18:34:39.718136 env[1396]: time="2024-02-09T18:34:39.718109894Z" level=info msg="Loading containers: done." Feb 9 18:34:39.739012 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1817088377-merged.mount: Deactivated successfully. 
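The audit PROCTITLE records above carry the full iptables command line hex-encoded, with NUL bytes separating the arguments. A quick way to decode one (a sketch using `xxd`; the hex string is copied from the first PROCTITLE entry in this section):

```shell
# Decode an audit PROCTITLE value: hex -> bytes, then NUL argument separators -> spaces
hex='2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552'
printf '%s' "$hex" | xxd -r -p | tr '\0' ' '
# -> /usr/sbin/iptables --wait -I FORWARD -j DOCKER-USER
```

Decoding the remaining entries the same way shows the usual Docker rule set being installed: MASQUERADE for 172.17.0.0/16, the DOCKER and DOCKER-ISOLATION-STAGE-1/2 chains, and conntrack ACCEPT for established flows.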
Feb 9 18:34:39.745299 env[1396]: time="2024-02-09T18:34:39.745253213Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 18:34:39.745467 env[1396]: time="2024-02-09T18:34:39.745430230Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 18:34:39.745555 env[1396]: time="2024-02-09T18:34:39.745532862Z" level=info msg="Daemon has completed initialization" Feb 9 18:34:39.758126 systemd[1]: Started docker.service. Feb 9 18:34:39.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:39.764249 env[1396]: time="2024-02-09T18:34:39.764211483Z" level=info msg="API listen on /run/docker.sock" Feb 9 18:34:39.779895 systemd[1]: Reloading. Feb 9 18:34:39.824553 /usr/lib/systemd/system-generators/torcx-generator[1539]: time="2024-02-09T18:34:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:34:39.824589 /usr/lib/systemd/system-generators/torcx-generator[1539]: time="2024-02-09T18:34:39Z" level=info msg="torcx already run" Feb 9 18:34:39.881861 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:34:39.881880 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
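The locksmithd.service warnings above flag `CPUShares=` and `MemoryLimit=` as deprecated cgroup v1 directives. The usual remedy is a systemd drop-in that clears them and sets the v2 equivalents, rather than editing the shipped unit (a sketch; the drop-in filename and the weight/limit values are illustrative, not taken from this host):

```shell
# Hypothetical drop-in overriding the deprecated directives from locksmithd.service
mkdir -p /etc/systemd/system/locksmithd.service.d
cat > /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf <<'EOF'
[Service]
# An empty assignment clears the value inherited from the main unit file
CPUShares=
CPUWeight=10
MemoryLimit=
MemoryMax=128M
EOF
systemctl daemon-reload
```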
Feb 9 18:34:39.899027 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:34:39.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:39.963605 systemd[1]: Started kubelet.service. Feb 9 18:34:40.117679 kubelet[1582]: E0209 18:34:40.117554 1582 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 18:34:40.120037 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 18:34:40.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 18:34:40.120241 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 18:34:40.303441 env[1223]: time="2024-02-09T18:34:40.303196580Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 18:34:40.939337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1101933782.mount: Deactivated successfully. 
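The kubelet exits here because no container runtime endpoint was configured, and it keeps failing the same way on later restarts below. One way to supply the endpoint is a systemd drop-in (a hedged sketch; the drop-in path, the `KUBELET_EXTRA_ARGS` variable name, and the containerd socket path are assumptions about this host's unit layout, not shown in the log):

```shell
# Hypothetical drop-in pointing the kubelet at containerd's CRI socket
mkdir -p /etc/systemd/system/kubelet.service.d
cat > /etc/systemd/system/kubelet.service.d/20-runtime.conf <<'EOF'
[Service]
Environment="KUBELET_EXTRA_ARGS=--container-runtime-endpoint=unix:///run/containerd/containerd.sock"
EOF
systemctl daemon-reload && systemctl restart kubelet
```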
Feb 9 18:34:42.470765 env[1223]: time="2024-02-09T18:34:42.470717535Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:42.473061 env[1223]: time="2024-02-09T18:34:42.473022969Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:42.475277 env[1223]: time="2024-02-09T18:34:42.475251546Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:42.480255 env[1223]: time="2024-02-09T18:34:42.480200506Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:42.480608 env[1223]: time="2024-02-09T18:34:42.480561194Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88\"" Feb 9 18:34:42.490606 env[1223]: time="2024-02-09T18:34:42.490576824Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 18:34:44.145018 env[1223]: time="2024-02-09T18:34:44.144969645Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:44.146616 env[1223]: time="2024-02-09T18:34:44.146579330Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 9 18:34:44.148226 env[1223]: time="2024-02-09T18:34:44.148199531Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:44.149983 env[1223]: time="2024-02-09T18:34:44.149943689Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:44.150849 env[1223]: time="2024-02-09T18:34:44.150812063Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2\"" Feb 9 18:34:44.159809 env[1223]: time="2024-02-09T18:34:44.159775508Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 18:34:45.321920 env[1223]: time="2024-02-09T18:34:45.321870248Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:45.325555 env[1223]: time="2024-02-09T18:34:45.325515153Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:45.327282 env[1223]: time="2024-02-09T18:34:45.327252818Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:45.329224 env[1223]: time="2024-02-09T18:34:45.329196762Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:45.329979 env[1223]: time="2024-02-09T18:34:45.329941453Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a\"" Feb 9 18:34:45.338608 env[1223]: time="2024-02-09T18:34:45.338583112Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 18:34:46.443731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4154321158.mount: Deactivated successfully. Feb 9 18:34:46.773102 env[1223]: time="2024-02-09T18:34:46.772388314Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:46.778117 env[1223]: time="2024-02-09T18:34:46.778079181Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:46.779491 env[1223]: time="2024-02-09T18:34:46.779464468Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:46.780952 env[1223]: time="2024-02-09T18:34:46.780887713Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:46.781214 env[1223]: time="2024-02-09T18:34:46.781187346Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference 
\"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 9 18:34:46.795069 env[1223]: time="2024-02-09T18:34:46.795042294Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 18:34:47.261509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2295818923.mount: Deactivated successfully. Feb 9 18:34:47.268677 env[1223]: time="2024-02-09T18:34:47.268646281Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:47.270416 env[1223]: time="2024-02-09T18:34:47.270390197Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:47.272059 env[1223]: time="2024-02-09T18:34:47.272024063Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:47.273620 env[1223]: time="2024-02-09T18:34:47.273595824Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:47.274057 env[1223]: time="2024-02-09T18:34:47.274038175Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 9 18:34:47.282461 env[1223]: time="2024-02-09T18:34:47.282382293Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 18:34:48.059024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4130366140.mount: Deactivated successfully. 
Feb 9 18:34:49.919254 env[1223]: time="2024-02-09T18:34:49.919209585Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:49.920688 env[1223]: time="2024-02-09T18:34:49.920662766Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:49.923002 env[1223]: time="2024-02-09T18:34:49.922969342Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:49.924300 env[1223]: time="2024-02-09T18:34:49.924275520Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:49.925665 env[1223]: time="2024-02-09T18:34:49.925633985Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb\"" Feb 9 18:34:49.934121 env[1223]: time="2024-02-09T18:34:49.934094944Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 18:34:50.371185 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 18:34:50.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:50.371387 systemd[1]: Stopped kubelet.service. Feb 9 18:34:50.372871 systemd[1]: Started kubelet.service. 
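The "Scheduled restart job, restart counter is at 1" line means kubelet.service carries a `Restart=` policy, so systemd re-launches it after each exit-code failure; the roughly ten-second gap between the failure at 18:34:40 and this restart suggests a matching `RestartSec=`. The relevant `[Service]` settings would look like this (a sketch of a plausible policy; the actual Flatcar unit may differ):

```shell
# Hypothetical restart policy producing the "restart counter" messages above
cat <<'EOF'
[Service]
Restart=on-failure
RestartSec=10
EOF
```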
Feb 9 18:34:50.373790 kernel: kauditd_printk_skb: 87 callbacks suppressed Feb 9 18:34:50.373841 kernel: audit: type=1130 audit(1707503690.370:173): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:50.373864 kernel: audit: type=1131 audit(1707503690.370:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:50.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:50.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:50.377377 kernel: audit: type=1130 audit(1707503690.371:175): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:50.410842 kubelet[1643]: E0209 18:34:50.410787 1643 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 18:34:50.413970 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 18:34:50.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 18:34:50.414126 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
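The replayed records above ("kauditd_printk_skb: 87 callbacks suppressed") use the raw audit form `audit(1707503690.370:173)`: a UNIX epoch timestamp with millisecond fraction, then a serial number. The epoch can be cross-checked against the journal's wall-clock prefix (a sketch assuming GNU `date`):

```shell
# Convert the audit epoch to UTC; it should match the "Feb 9 18:34:50" journal prefix
date -u -d @1707503690 +'%b %e %H:%M:%S'
# -> Feb  9 18:34:50
```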
Feb 9 18:34:50.416381 kernel: audit: type=1131 audit(1707503690.413:176): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 18:34:50.581431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3023827902.mount: Deactivated successfully. Feb 9 18:34:51.178124 env[1223]: time="2024-02-09T18:34:51.178042837Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:51.180071 env[1223]: time="2024-02-09T18:34:51.180024920Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:51.181976 env[1223]: time="2024-02-09T18:34:51.181945025Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:51.183417 env[1223]: time="2024-02-09T18:34:51.183386482Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:51.183881 env[1223]: time="2024-02-09T18:34:51.183850874Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0\"" Feb 9 18:34:55.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:55.420496 systemd[1]: Stopped kubelet.service. 
Feb 9 18:34:55.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:55.424270 kernel: audit: type=1130 audit(1707503695.419:177): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:55.424339 kernel: audit: type=1131 audit(1707503695.419:178): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:55.434141 systemd[1]: Reloading. Feb 9 18:34:55.478464 /usr/lib/systemd/system-generators/torcx-generator[1745]: time="2024-02-09T18:34:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:34:55.478821 /usr/lib/systemd/system-generators/torcx-generator[1745]: time="2024-02-09T18:34:55Z" level=info msg="torcx already run" Feb 9 18:34:55.542294 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:34:55.542314 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:34:55.559916 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:34:55.630420 systemd[1]: Started kubelet.service. 
Feb 9 18:34:55.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:55.633540 kernel: audit: type=1130 audit(1707503695.629:179): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:55.667710 kubelet[1789]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 18:34:55.667710 kubelet[1789]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:34:55.668041 kubelet[1789]: I0209 18:34:55.667820 1789 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 18:34:55.668939 kubelet[1789]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 18:34:55.668939 kubelet[1789]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
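Both deprecation warnings above point at the same remedy: move the value of `--volume-plugin-dir` into the KubeletConfiguration file passed via `--config`, while the sandbox image moves to the runtime side (containerd's `sandbox_image`) rather than the kubelet. A minimal sketch (the config file path is illustrative; the plugin directory value is the one the kubelet itself reports later in this log):

```shell
# Hypothetical KubeletConfiguration replacing the deprecated --volume-plugin-dir flag
cat <<'EOF' > /etc/kubernetes/kubelet-config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
EOF
```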
Feb 9 18:34:56.555079 kubelet[1789]: I0209 18:34:56.555051 1789 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 18:34:56.555079 kubelet[1789]: I0209 18:34:56.555077 1789 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 18:34:56.555286 kubelet[1789]: I0209 18:34:56.555272 1789 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 18:34:56.559731 kubelet[1789]: I0209 18:34:56.559714 1789 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 18:34:56.560011 kubelet[1789]: E0209 18:34:56.559988 1789 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.89:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.89:6443: connect: connection refused Feb 9 18:34:56.561618 kubelet[1789]: W0209 18:34:56.561597 1789 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 18:34:56.562389 kubelet[1789]: I0209 18:34:56.562367 1789 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 18:34:56.563002 kubelet[1789]: I0209 18:34:56.562979 1789 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 18:34:56.563077 kubelet[1789]: I0209 18:34:56.563054 1789 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 18:34:56.563158 kubelet[1789]: I0209 18:34:56.563080 1789 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 18:34:56.563158 kubelet[1789]: I0209 18:34:56.563092 1789 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 18:34:56.563258 kubelet[1789]: I0209 18:34:56.563236 1789 
state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:34:56.568283 kubelet[1789]: I0209 18:34:56.568262 1789 kubelet.go:398] "Attempting to sync node with API server" Feb 9 18:34:56.568283 kubelet[1789]: I0209 18:34:56.568288 1789 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 18:34:56.568456 kubelet[1789]: I0209 18:34:56.568446 1789 kubelet.go:297] "Adding apiserver pod source" Feb 9 18:34:56.568485 kubelet[1789]: I0209 18:34:56.568460 1789 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 18:34:56.569557 kubelet[1789]: W0209 18:34:56.569506 1789 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Feb 9 18:34:56.569634 kubelet[1789]: E0209 18:34:56.569564 1789 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Feb 9 18:34:56.569634 kubelet[1789]: W0209 18:34:56.569550 1789 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.89:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Feb 9 18:34:56.569634 kubelet[1789]: E0209 18:34:56.569589 1789 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.89:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Feb 9 18:34:56.569716 kubelet[1789]: I0209 18:34:56.569690 1789 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" 
version="1.6.16" apiVersion="v1" Feb 9 18:34:56.570528 kubelet[1789]: W0209 18:34:56.570502 1789 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 18:34:56.570986 kubelet[1789]: I0209 18:34:56.570960 1789 server.go:1186] "Started kubelet" Feb 9 18:34:56.571295 kubelet[1789]: I0209 18:34:56.571268 1789 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 18:34:56.571950 kubelet[1789]: I0209 18:34:56.571868 1789 server.go:451] "Adding debug handlers to kubelet server" Feb 9 18:34:56.572237 kubelet[1789]: E0209 18:34:56.572119 1789 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b24590b6c6f506", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 34, 56, 570938630, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 34, 56, 570938630, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.89:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.89:6443: connect: connection refused'(may retry after sleeping) Feb 9 
18:34:56.573002 kubelet[1789]: E0209 18:34:56.572964 1789 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 18:34:56.573002 kubelet[1789]: E0209 18:34:56.573004 1789 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 18:34:56.572000 audit[1789]: AVC avc: denied { mac_admin } for pid=1789 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:56.573620 kubelet[1789]: I0209 18:34:56.573532 1789 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 9 18:34:56.573620 kubelet[1789]: I0209 18:34:56.573562 1789 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 9 18:34:56.573620 kubelet[1789]: I0209 18:34:56.573618 1789 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 18:34:56.572000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 18:34:56.575949 kubelet[1789]: E0209 18:34:56.575937 1789 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 18:34:56.576146 kubelet[1789]: I0209 18:34:56.576136 1789 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 18:34:56.576231 kernel: audit: type=1400 audit(1707503696.572:180): avc: denied { mac_admin } for pid=1789 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:56.576269 kernel: audit: type=1401 audit(1707503696.572:180): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 18:34:56.576299 kernel: audit: type=1300 audit(1707503696.572:180): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000b9e1b0 a1=400114e858 a2=4000b9e180 a3=25 items=0 ppid=1 pid=1789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.572000 audit[1789]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000b9e1b0 a1=400114e858 a2=4000b9e180 a3=25 items=0 ppid=1 pid=1789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.576491 kubelet[1789]: I0209 18:34:56.576468 1789 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 18:34:56.576811 kubelet[1789]: E0209 18:34:56.576777 1789 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.89:6443: connect: connection refused Feb 9 18:34:56.577204 kubelet[1789]: W0209 18:34:56.577166 1789 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Feb 9 18:34:56.577273 kubelet[1789]: E0209 18:34:56.577210 1789 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: 
connect: connection refused Feb 9 18:34:56.572000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 18:34:56.581096 kernel: audit: type=1327 audit(1707503696.572:180): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 18:34:56.581147 kernel: audit: type=1400 audit(1707503696.572:181): avc: denied { mac_admin } for pid=1789 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:56.572000 audit[1789]: AVC avc: denied { mac_admin } for pid=1789 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:56.582802 kernel: audit: type=1401 audit(1707503696.572:181): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 18:34:56.572000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 18:34:56.583683 kernel: audit: type=1300 audit(1707503696.572:181): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000bbe3c0 a1=400114e870 a2=4000b9e240 a3=25 items=0 ppid=1 pid=1789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.572000 audit[1789]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000bbe3c0 a1=400114e870 a2=4000b9e240 a3=25 items=0 ppid=1 pid=1789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.572000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 18:34:56.581000 audit[1801]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1801 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:56.581000 audit[1801]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffffac04c30 a2=0 a3=1 items=0 ppid=1789 pid=1801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.581000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 18:34:56.583000 audit[1804]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1804 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:56.583000 audit[1804]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffde6bd4f0 a2=0 a3=1 items=0 ppid=1789 pid=1804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.583000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 18:34:56.585000 audit[1806]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1806 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:56.585000 audit[1806]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 
a0=3 a1=ffffe8d65d90 a2=0 a3=1 items=0 ppid=1789 pid=1806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.585000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 18:34:56.587000 audit[1808]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1808 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:56.587000 audit[1808]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffecd83180 a2=0 a3=1 items=0 ppid=1789 pid=1808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.587000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 18:34:56.600000 audit[1813]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1813 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:56.600000 audit[1813]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffd6f454d0 a2=0 a3=1 items=0 ppid=1789 pid=1813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.600000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 9 18:34:56.601000 audit[1814]: NETFILTER_CFG table=nat:31 family=2 entries=1 
op=nft_register_chain pid=1814 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:56.601000 audit[1814]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffc387a40 a2=0 a3=1 items=0 ppid=1789 pid=1814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.601000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 18:34:56.606944 kubelet[1789]: I0209 18:34:56.606923 1789 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 18:34:56.607092 kubelet[1789]: I0209 18:34:56.607080 1789 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 18:34:56.607159 kubelet[1789]: I0209 18:34:56.607150 1789 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:34:56.608787 kubelet[1789]: I0209 18:34:56.608768 1789 policy_none.go:49] "None policy: Start" Feb 9 18:34:56.607000 audit[1819]: NETFILTER_CFG table=nat:32 family=2 entries=1 op=nft_register_rule pid=1819 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:56.607000 audit[1819]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffe35585e0 a2=0 a3=1 items=0 ppid=1789 pid=1819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.609528 kubelet[1789]: I0209 18:34:56.609421 1789 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 18:34:56.609528 kubelet[1789]: I0209 18:34:56.609444 1789 state_mem.go:35] "Initializing new in-memory state store" Feb 9 18:34:56.607000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 
18:34:56.613531 kubelet[1789]: I0209 18:34:56.613506 1789 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 18:34:56.612000 audit[1789]: AVC avc: denied { mac_admin } for pid=1789 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:56.612000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 18:34:56.612000 audit[1789]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40011d92f0 a1=40011a57b8 a2=40011d92c0 a3=25 items=0 ppid=1 pid=1789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.612000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 18:34:56.613757 kubelet[1789]: I0209 18:34:56.613567 1789 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 9 18:34:56.613757 kubelet[1789]: I0209 18:34:56.613710 1789 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 18:34:56.614969 kubelet[1789]: E0209 18:34:56.614934 1789 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 9 18:34:56.616000 audit[1822]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=1822 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:56.616000 audit[1822]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffe7c4e820 a2=0 a3=1 items=0 ppid=1789 pid=1822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.616000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 18:34:56.616000 audit[1823]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1823 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:56.616000 audit[1823]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc5f5ab60 a2=0 a3=1 items=0 ppid=1789 pid=1823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.616000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 18:34:56.618000 audit[1824]: NETFILTER_CFG table=nat:35 
family=2 entries=1 op=nft_register_chain pid=1824 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:56.618000 audit[1824]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe1d3a8d0 a2=0 a3=1 items=0 ppid=1789 pid=1824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.618000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 18:34:56.619000 audit[1826]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_rule pid=1826 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:56.619000 audit[1826]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffc46a2be0 a2=0 a3=1 items=0 ppid=1789 pid=1826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.619000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 18:34:56.621000 audit[1828]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_rule pid=1828 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:56.621000 audit[1828]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=fffff3c329f0 a2=0 a3=1 items=0 ppid=1789 pid=1828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.621000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 18:34:56.623000 audit[1830]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=1830 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:56.623000 audit[1830]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffce9638b0 a2=0 a3=1 items=0 ppid=1789 pid=1830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.623000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 18:34:56.625000 audit[1832]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_rule pid=1832 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:56.625000 audit[1832]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffd1d0fca0 a2=0 a3=1 items=0 ppid=1789 pid=1832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.625000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 18:34:56.627000 audit[1834]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_rule pid=1834 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:56.627000 audit[1834]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=540 a0=3 a1=ffffd7516d90 a2=0 a3=1 items=0 
ppid=1789 pid=1834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.627000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 18:34:56.629421 kubelet[1789]: I0209 18:34:56.629390 1789 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 18:34:56.628000 audit[1835]: NETFILTER_CFG table=mangle:41 family=10 entries=2 op=nft_register_chain pid=1835 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:56.628000 audit[1835]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffffc52a930 a2=0 a3=1 items=0 ppid=1789 pid=1835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.628000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 18:34:56.628000 audit[1836]: NETFILTER_CFG table=mangle:42 family=2 entries=1 op=nft_register_chain pid=1836 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:56.628000 audit[1836]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc487de40 a2=0 a3=1 items=0 ppid=1789 pid=1836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.628000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 
Feb 9 18:34:56.629000 audit[1837]: NETFILTER_CFG table=nat:43 family=10 entries=2 op=nft_register_chain pid=1837 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:56.629000 audit[1837]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc96a8ea0 a2=0 a3=1 items=0 ppid=1789 pid=1837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.629000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 18:34:56.630000 audit[1838]: NETFILTER_CFG table=nat:44 family=2 entries=1 op=nft_register_chain pid=1838 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:56.630000 audit[1838]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc6936ea0 a2=0 a3=1 items=0 ppid=1789 pid=1838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.630000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 18:34:56.630000 audit[1839]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_chain pid=1839 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:56.630000 audit[1839]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd41ce210 a2=0 a3=1 items=0 ppid=1789 pid=1839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.630000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 
18:34:56.632000 audit[1841]: NETFILTER_CFG table=nat:46 family=10 entries=1 op=nft_register_rule pid=1841 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:56.632000 audit[1841]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffd1ea2e40 a2=0 a3=1 items=0 ppid=1789 pid=1841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.632000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 18:34:56.633000 audit[1842]: NETFILTER_CFG table=filter:47 family=10 entries=2 op=nft_register_chain pid=1842 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:56.633000 audit[1842]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffede48820 a2=0 a3=1 items=0 ppid=1789 pid=1842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.633000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 18:34:56.635000 audit[1844]: NETFILTER_CFG table=filter:48 family=10 entries=1 op=nft_register_rule pid=1844 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:56.635000 audit[1844]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=fffff8f34980 a2=0 a3=1 items=0 ppid=1789 pid=1844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.635000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 18:34:56.636000 audit[1845]: NETFILTER_CFG table=nat:49 family=10 entries=1 op=nft_register_chain pid=1845 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:56.636000 audit[1845]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd840eec0 a2=0 a3=1 items=0 ppid=1789 pid=1845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.636000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 18:34:56.637000 audit[1846]: NETFILTER_CFG table=nat:50 family=10 entries=1 op=nft_register_chain pid=1846 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:56.637000 audit[1846]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffff0bb9b0 a2=0 a3=1 items=0 ppid=1789 pid=1846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.637000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 18:34:56.639000 audit[1848]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_rule pid=1848 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:56.639000 audit[1848]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffebd5ea70 a2=0 a3=1 items=0 ppid=1789 pid=1848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.639000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 18:34:56.641000 audit[1850]: NETFILTER_CFG table=nat:52 family=10 entries=2 op=nft_register_chain pid=1850 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:56.641000 audit[1850]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffd22a8270 a2=0 a3=1 items=0 ppid=1789 pid=1850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.641000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 18:34:56.643000 audit[1852]: NETFILTER_CFG table=nat:53 family=10 entries=1 op=nft_register_rule pid=1852 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:56.643000 audit[1852]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=fffff186e340 a2=0 a3=1 items=0 ppid=1789 pid=1852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.643000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 18:34:56.645000 audit[1854]: NETFILTER_CFG table=nat:54 family=10 entries=1 op=nft_register_rule pid=1854 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Feb 9 18:34:56.645000 audit[1854]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffdffad010 a2=0 a3=1 items=0 ppid=1789 pid=1854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.645000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 18:34:56.647000 audit[1856]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_rule pid=1856 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:56.647000 audit[1856]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=556 a0=3 a1=ffffd7531e70 a2=0 a3=1 items=0 ppid=1789 pid=1856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.647000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 18:34:56.649472 kubelet[1789]: I0209 18:34:56.649441 1789 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 18:34:56.649472 kubelet[1789]: I0209 18:34:56.649465 1789 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 18:34:56.649537 kubelet[1789]: I0209 18:34:56.649481 1789 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 18:34:56.649537 kubelet[1789]: E0209 18:34:56.649525 1789 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 18:34:56.650234 kubelet[1789]: W0209 18:34:56.650188 1789 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Feb 9 18:34:56.650310 kubelet[1789]: E0209 18:34:56.650295 1789 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Feb 9 18:34:56.649000 audit[1857]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=1857 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:56.649000 audit[1857]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc514df10 a2=0 a3=1 items=0 ppid=1789 pid=1857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.649000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 18:34:56.649000 audit[1858]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=1858 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:56.649000 audit[1858]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=100 a0=3 a1=ffffde73aed0 a2=0 a3=1 items=0 ppid=1789 pid=1858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.649000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 18:34:56.650000 audit[1859]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=1859 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:56.650000 audit[1859]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcfd50540 a2=0 a3=1 items=0 ppid=1789 pid=1859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.650000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 18:34:56.677405 kubelet[1789]: I0209 18:34:56.677375 1789 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 18:34:56.677813 kubelet[1789]: E0209 18:34:56.677795 1789 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Feb 9 18:34:56.749995 kubelet[1789]: I0209 18:34:56.749964 1789 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:34:56.751097 kubelet[1789]: I0209 18:34:56.751065 1789 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:34:56.752060 kubelet[1789]: I0209 18:34:56.752040 1789 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:34:56.753322 kubelet[1789]: I0209 18:34:56.753301 1789 status_manager.go:698] "Failed to get status for pod" podUID=5ca8aead999f2b726c4673ac6063b770 
pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.89:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.89:6443: connect: connection refused" Feb 9 18:34:56.753508 kubelet[1789]: I0209 18:34:56.753490 1789 status_manager.go:698] "Failed to get status for pod" podUID=550020dd9f101bcc23e1d3c651841c4d pod="kube-system/kube-controller-manager-localhost" err="Get \"https://10.0.0.89:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.89:6443: connect: connection refused" Feb 9 18:34:56.756588 kubelet[1789]: I0209 18:34:56.756551 1789 status_manager.go:698] "Failed to get status for pod" podUID=72ae17a74a2eae76daac6d298477aff0 pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.89:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.89:6443: connect: connection refused" Feb 9 18:34:56.777396 kubelet[1789]: I0209 18:34:56.777368 1789 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:34:56.777467 kubelet[1789]: I0209 18:34:56.777408 1789 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:34:56.777467 kubelet[1789]: I0209 18:34:56.777431 1789 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod 
\"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 9 18:34:56.777525 kubelet[1789]: I0209 18:34:56.777494 1789 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ca8aead999f2b726c4673ac6063b770-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5ca8aead999f2b726c4673ac6063b770\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:34:56.777549 kubelet[1789]: I0209 18:34:56.777536 1789 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ca8aead999f2b726c4673ac6063b770-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5ca8aead999f2b726c4673ac6063b770\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:34:56.777594 kubelet[1789]: I0209 18:34:56.777575 1789 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:34:56.777624 kubelet[1789]: I0209 18:34:56.777602 1789 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:34:56.777624 kubelet[1789]: I0209 18:34:56.777623 1789 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") 
pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:34:56.777711 kubelet[1789]: I0209 18:34:56.777642 1789 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ca8aead999f2b726c4673ac6063b770-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5ca8aead999f2b726c4673ac6063b770\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:34:56.777938 kubelet[1789]: E0209 18:34:56.777901 1789 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.89:6443: connect: connection refused Feb 9 18:34:56.879182 kubelet[1789]: I0209 18:34:56.879080 1789 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 18:34:56.882303 kubelet[1789]: E0209 18:34:56.882279 1789 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Feb 9 18:34:57.057105 kubelet[1789]: E0209 18:34:57.057085 1789 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:34:57.058020 env[1223]: time="2024-02-09T18:34:57.057760891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5ca8aead999f2b726c4673ac6063b770,Namespace:kube-system,Attempt:0,}" Feb 9 18:34:57.059988 kubelet[1789]: E0209 18:34:57.059967 1789 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:34:57.060050 kubelet[1789]: E0209 18:34:57.060017 1789 dns.go:156] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:34:57.060513 env[1223]: time="2024-02-09T18:34:57.060284780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,}" Feb 9 18:34:57.060513 env[1223]: time="2024-02-09T18:34:57.060409807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,}" Feb 9 18:34:57.178437 kubelet[1789]: E0209 18:34:57.178307 1789 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.89:6443: connect: connection refused Feb 9 18:34:57.283952 kubelet[1789]: I0209 18:34:57.283916 1789 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 18:34:57.284270 kubelet[1789]: E0209 18:34:57.284245 1789 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Feb 9 18:34:57.455536 kubelet[1789]: W0209 18:34:57.455398 1789 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.89:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Feb 9 18:34:57.455536 kubelet[1789]: E0209 18:34:57.455457 1789 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.89:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Feb 9 18:34:57.543318 kubelet[1789]: W0209 18:34:57.543261 1789 
reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Feb 9 18:34:57.543318 kubelet[1789]: E0209 18:34:57.543316 1789 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Feb 9 18:34:57.587890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3117537985.mount: Deactivated successfully. Feb 9 18:34:57.592871 env[1223]: time="2024-02-09T18:34:57.592833095Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:57.593777 env[1223]: time="2024-02-09T18:34:57.593732628Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:57.596763 env[1223]: time="2024-02-09T18:34:57.596735842Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:57.598221 env[1223]: time="2024-02-09T18:34:57.598197718Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:57.599492 env[1223]: time="2024-02-09T18:34:57.599466498Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 
18:34:57.600225 env[1223]: time="2024-02-09T18:34:57.600199514Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:57.603613 env[1223]: time="2024-02-09T18:34:57.603588122Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:57.606627 env[1223]: time="2024-02-09T18:34:57.606558040Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:57.608001 env[1223]: time="2024-02-09T18:34:57.607975749Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:57.608666 env[1223]: time="2024-02-09T18:34:57.608643454Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:57.609666 env[1223]: time="2024-02-09T18:34:57.609634919Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:57.610769 env[1223]: time="2024-02-09T18:34:57.610739860Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:57.646046 env[1223]: time="2024-02-09T18:34:57.645971701Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:34:57.646046 env[1223]: time="2024-02-09T18:34:57.646012111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:34:57.646046 env[1223]: time="2024-02-09T18:34:57.646023023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:34:57.646517 env[1223]: time="2024-02-09T18:34:57.646467533Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/115d7d8e4b774db9fe4cb711dd58eefacc0f479030b54a3e508b2c70e756ea73 pid=1886 runtime=io.containerd.runc.v2 Feb 9 18:34:57.646581 env[1223]: time="2024-02-09T18:34:57.646532166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:34:57.646581 env[1223]: time="2024-02-09T18:34:57.646568219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:34:57.646643 env[1223]: time="2024-02-09T18:34:57.646580490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:34:57.646866 env[1223]: time="2024-02-09T18:34:57.646824589Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c34b46e34b8aee676f65c330b0c00e42f634022eab78aeb8b9715052a599cf44 pid=1885 runtime=io.containerd.runc.v2 Feb 9 18:34:57.647946 env[1223]: time="2024-02-09T18:34:57.647871493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:34:57.647946 env[1223]: time="2024-02-09T18:34:57.647904029Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:34:57.647946 env[1223]: time="2024-02-09T18:34:57.647914061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:34:57.648084 env[1223]: time="2024-02-09T18:34:57.648029296Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/37a08a53781a1b875c9b55010e6fb55af5f203a6987c80de666afd794fdc2d55 pid=1887 runtime=io.containerd.runc.v2 Feb 9 18:34:57.700033 kubelet[1789]: W0209 18:34:57.699989 1789 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Feb 9 18:34:57.700033 kubelet[1789]: E0209 18:34:57.700030 1789 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Feb 9 18:34:57.722012 env[1223]: time="2024-02-09T18:34:57.721158122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,} returns sandbox id \"115d7d8e4b774db9fe4cb711dd58eefacc0f479030b54a3e508b2c70e756ea73\"" Feb 9 18:34:57.723458 kubelet[1789]: E0209 18:34:57.723436 1789 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:34:57.724771 env[1223]: 
time="2024-02-09T18:34:57.724735669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5ca8aead999f2b726c4673ac6063b770,Namespace:kube-system,Attempt:0,} returns sandbox id \"c34b46e34b8aee676f65c330b0c00e42f634022eab78aeb8b9715052a599cf44\"" Feb 9 18:34:57.725534 kubelet[1789]: E0209 18:34:57.725517 1789 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:34:57.726501 env[1223]: time="2024-02-09T18:34:57.726462629Z" level=info msg="CreateContainer within sandbox \"115d7d8e4b774db9fe4cb711dd58eefacc0f479030b54a3e508b2c70e756ea73\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 18:34:57.727536 env[1223]: time="2024-02-09T18:34:57.727501099Z" level=info msg="CreateContainer within sandbox \"c34b46e34b8aee676f65c330b0c00e42f634022eab78aeb8b9715052a599cf44\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 18:34:57.731467 env[1223]: time="2024-02-09T18:34:57.731430986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"37a08a53781a1b875c9b55010e6fb55af5f203a6987c80de666afd794fdc2d55\"" Feb 9 18:34:57.732576 kubelet[1789]: E0209 18:34:57.732462 1789 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:34:57.734163 env[1223]: time="2024-02-09T18:34:57.734129945Z" level=info msg="CreateContainer within sandbox \"37a08a53781a1b875c9b55010e6fb55af5f203a6987c80de666afd794fdc2d55\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 18:34:57.748108 env[1223]: time="2024-02-09T18:34:57.748052224Z" level=info msg="CreateContainer within sandbox 
\"c34b46e34b8aee676f65c330b0c00e42f634022eab78aeb8b9715052a599cf44\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cc9131cc8f325b9122bc52150e3514676d6c105913343d03b5ac1edfc723f03d\"" Feb 9 18:34:57.748715 env[1223]: time="2024-02-09T18:34:57.748686833Z" level=info msg="StartContainer for \"cc9131cc8f325b9122bc52150e3514676d6c105913343d03b5ac1edfc723f03d\"" Feb 9 18:34:57.748914 env[1223]: time="2024-02-09T18:34:57.748883647Z" level=info msg="CreateContainer within sandbox \"115d7d8e4b774db9fe4cb711dd58eefacc0f479030b54a3e508b2c70e756ea73\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"76e1887dac059400a2ef7d8a256f3903c63b5cee818ad293026c2b97ec50fa1c\"" Feb 9 18:34:57.749259 env[1223]: time="2024-02-09T18:34:57.749235027Z" level=info msg="StartContainer for \"76e1887dac059400a2ef7d8a256f3903c63b5cee818ad293026c2b97ec50fa1c\"" Feb 9 18:34:57.755035 env[1223]: time="2024-02-09T18:34:57.754986043Z" level=info msg="CreateContainer within sandbox \"37a08a53781a1b875c9b55010e6fb55af5f203a6987c80de666afd794fdc2d55\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"579cab1906e5adb48d9365a1b8c25704aa907c64931d46330b2047db94ca2133\"" Feb 9 18:34:57.755450 env[1223]: time="2024-02-09T18:34:57.755424478Z" level=info msg="StartContainer for \"579cab1906e5adb48d9365a1b8c25704aa907c64931d46330b2047db94ca2133\"" Feb 9 18:34:57.831102 env[1223]: time="2024-02-09T18:34:57.831061605Z" level=info msg="StartContainer for \"76e1887dac059400a2ef7d8a256f3903c63b5cee818ad293026c2b97ec50fa1c\" returns successfully" Feb 9 18:34:57.853857 env[1223]: time="2024-02-09T18:34:57.853818934Z" level=info msg="StartContainer for \"579cab1906e5adb48d9365a1b8c25704aa907c64931d46330b2047db94ca2133\" returns successfully" Feb 9 18:34:57.868664 env[1223]: time="2024-02-09T18:34:57.868586306Z" level=info msg="StartContainer for \"cc9131cc8f325b9122bc52150e3514676d6c105913343d03b5ac1edfc723f03d\" returns 
successfully" Feb 9 18:34:57.979543 kubelet[1789]: E0209 18:34:57.979417 1789 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.89:6443: connect: connection refused Feb 9 18:34:58.085302 kubelet[1789]: I0209 18:34:58.085268 1789 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 18:34:58.655269 kubelet[1789]: E0209 18:34:58.655233 1789 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:34:58.657135 kubelet[1789]: E0209 18:34:58.657059 1789 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:34:58.658698 kubelet[1789]: E0209 18:34:58.658628 1789 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:34:59.660298 kubelet[1789]: E0209 18:34:59.660259 1789 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:34:59.660709 kubelet[1789]: E0209 18:34:59.660683 1789 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:34:59.661242 kubelet[1789]: E0209 18:34:59.661214 1789 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:00.077516 kubelet[1789]: E0209 18:35:00.077478 1789 nodelease.go:49] "Failed to get node when trying to set 
owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 9 18:35:00.096923 kubelet[1789]: I0209 18:35:00.096890 1789 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 18:35:00.107653 kubelet[1789]: E0209 18:35:00.107592 1789 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 18:35:00.570293 kubelet[1789]: I0209 18:35:00.570258 1789 apiserver.go:52] "Watching apiserver" Feb 9 18:35:00.977447 kubelet[1789]: I0209 18:35:00.977010 1789 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 18:35:01.006276 kubelet[1789]: I0209 18:35:01.006199 1789 reconciler.go:41] "Reconciler: start to sync state" Feb 9 18:35:01.180461 kubelet[1789]: E0209 18:35:01.180427 1789 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:01.662451 kubelet[1789]: E0209 18:35:01.662409 1789 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:02.709898 systemd[1]: Reloading. Feb 9 18:35:02.754859 /usr/lib/systemd/system-generators/torcx-generator[2125]: time="2024-02-09T18:35:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:35:02.754886 /usr/lib/systemd/system-generators/torcx-generator[2125]: time="2024-02-09T18:35:02Z" level=info msg="torcx already run" Feb 9 18:35:02.815382 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 9 18:35:02.815485 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:35:02.833968 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:35:02.919579 systemd[1]: Stopping kubelet.service... Feb 9 18:35:02.940683 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 18:35:02.941087 systemd[1]: Stopped kubelet.service. Feb 9 18:35:02.943455 kernel: kauditd_printk_skb: 104 callbacks suppressed Feb 9 18:35:02.943502 kernel: audit: type=1131 audit(1707503702.939:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:02.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:02.945165 systemd[1]: Started kubelet.service. Feb 9 18:35:02.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:02.949401 kernel: audit: type=1130 audit(1707503702.943:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:02.988104 kubelet[2169]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Feb 9 18:35:02.988104 kubelet[2169]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:35:02.988104 kubelet[2169]: I0209 18:35:02.988009 2169 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 18:35:02.989343 kubelet[2169]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 18:35:02.989343 kubelet[2169]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:35:02.993233 kubelet[2169]: I0209 18:35:02.993189 2169 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 18:35:02.993233 kubelet[2169]: I0209 18:35:02.993214 2169 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 18:35:02.993420 kubelet[2169]: I0209 18:35:02.993406 2169 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 18:35:02.994602 kubelet[2169]: I0209 18:35:02.994574 2169 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 18:35:02.995318 kubelet[2169]: I0209 18:35:02.995301 2169 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 18:35:02.996918 kubelet[2169]: W0209 18:35:02.996895 2169 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 18:35:02.997615 kubelet[2169]: I0209 18:35:02.997595 2169 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 18:35:02.997947 kubelet[2169]: I0209 18:35:02.997925 2169 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 18:35:02.997997 kubelet[2169]: I0209 18:35:02.997988 2169 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 18:35:02.998063 kubelet[2169]: I0209 18:35:02.998012 2169 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 18:35:02.998063 kubelet[2169]: I0209 18:35:02.998024 2169 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 18:35:02.998063 kubelet[2169]: I0209 18:35:02.998049 2169 
state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:35:03.000902 kubelet[2169]: I0209 18:35:03.000873 2169 kubelet.go:398] "Attempting to sync node with API server" Feb 9 18:35:03.000902 kubelet[2169]: I0209 18:35:03.000900 2169 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 18:35:03.000980 kubelet[2169]: I0209 18:35:03.000923 2169 kubelet.go:297] "Adding apiserver pod source" Feb 9 18:35:03.000980 kubelet[2169]: I0209 18:35:03.000938 2169 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 18:35:03.002488 kubelet[2169]: I0209 18:35:03.002455 2169 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 18:35:03.003214 kubelet[2169]: I0209 18:35:03.003178 2169 server.go:1186] "Started kubelet" Feb 9 18:35:03.004000 audit[2169]: AVC avc: denied { mac_admin } for pid=2169 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:03.006102 kubelet[2169]: I0209 18:35:03.006028 2169 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 9 18:35:03.006102 kubelet[2169]: I0209 18:35:03.006073 2169 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 9 18:35:03.006102 kubelet[2169]: I0209 18:35:03.006093 2169 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 18:35:03.004000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 18:35:03.008705 kernel: audit: type=1400 audit(1707503703.004:218): avc: denied { mac_admin } for pid=2169 comm="kubelet" 
capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:03.008776 kernel: audit: type=1401 audit(1707503703.004:218): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 18:35:03.008794 kernel: audit: type=1300 audit(1707503703.004:218): arch=c00000b7 syscall=5 success=no exit=-22 a0=400103c750 a1=4000de2a98 a2=400103c720 a3=25 items=0 ppid=1 pid=2169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:03.004000 audit[2169]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=400103c750 a1=4000de2a98 a2=400103c720 a3=25 items=0 ppid=1 pid=2169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:03.010160 kubelet[2169]: E0209 18:35:03.010131 2169 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 18:35:03.010256 kubelet[2169]: E0209 18:35:03.010245 2169 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 18:35:03.011288 kernel: audit: type=1327 audit(1707503703.004:218): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 18:35:03.004000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 18:35:03.011956 kubelet[2169]: I0209 18:35:03.011936 2169 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 18:35:03.012727 kubelet[2169]: I0209 18:35:03.012704 2169 server.go:451] "Adding debug handlers to kubelet server" Feb 9 18:35:03.004000 audit[2169]: AVC avc: denied { mac_admin } for pid=2169 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:03.015446 kernel: audit: type=1400 audit(1707503703.004:219): avc: denied { mac_admin } for pid=2169 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:03.015499 kernel: audit: type=1401 audit(1707503703.004:219): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 18:35:03.004000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 18:35:03.004000 audit[2169]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000cfb040 a1=4000de2ab0 a2=400103c7e0 a3=25 items=0 ppid=1 pid=2169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" 
exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:03.019695 kernel: audit: type=1300 audit(1707503703.004:219): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000cfb040 a1=4000de2ab0 a2=400103c7e0 a3=25 items=0 ppid=1 pid=2169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:03.019788 kernel: audit: type=1327 audit(1707503703.004:219): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 18:35:03.004000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 18:35:03.026539 kubelet[2169]: I0209 18:35:03.026513 2169 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 18:35:03.027017 kubelet[2169]: I0209 18:35:03.026950 2169 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 18:35:03.065985 kubelet[2169]: I0209 18:35:03.065933 2169 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 9 18:35:03.084202 kubelet[2169]: I0209 18:35:03.084178 2169 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 18:35:03.084319 kubelet[2169]: I0209 18:35:03.084307 2169 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 18:35:03.084396 kubelet[2169]: I0209 18:35:03.084385 2169 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:35:03.084580 kubelet[2169]: I0209 18:35:03.084566 2169 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 18:35:03.084671 kubelet[2169]: I0209 18:35:03.084659 2169 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 18:35:03.084724 kubelet[2169]: I0209 18:35:03.084715 2169 policy_none.go:49] "None policy: Start" Feb 9 18:35:03.085814 kubelet[2169]: I0209 18:35:03.085797 2169 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 18:35:03.085905 kubelet[2169]: I0209 18:35:03.085894 2169 state_mem.go:35] "Initializing new in-memory state store" Feb 9 18:35:03.086104 kubelet[2169]: I0209 18:35:03.086088 2169 state_mem.go:75] "Updated machine memory state" Feb 9 18:35:03.086645 kubelet[2169]: I0209 18:35:03.086620 2169 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 18:35:03.086645 kubelet[2169]: I0209 18:35:03.086640 2169 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 18:35:03.086722 kubelet[2169]: I0209 18:35:03.086655 2169 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 18:35:03.086722 kubelet[2169]: E0209 18:35:03.086715 2169 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 18:35:03.087730 kubelet[2169]: I0209 18:35:03.087712 2169 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 18:35:03.086000 audit[2169]: AVC avc: denied { mac_admin } for pid=2169 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:03.086000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 18:35:03.088031 kubelet[2169]: I0209 18:35:03.088012 2169 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 9 18:35:03.086000 audit[2169]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40007429f0 a1=4000de23d8 a2=40007429c0 a3=25 items=0 ppid=1 pid=2169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:03.086000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 18:35:03.090732 kubelet[2169]: I0209 18:35:03.090714 2169 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 18:35:03.187799 kubelet[2169]: I0209 18:35:03.187752 2169 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:35:03.187915 kubelet[2169]: I0209 18:35:03.187841 2169 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:35:03.187915 kubelet[2169]: I0209 18:35:03.187873 2169 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:35:03.192144 kubelet[2169]: I0209 18:35:03.192112 2169 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 18:35:03.194740 kubelet[2169]: E0209 18:35:03.194721 2169 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 9 18:35:03.328371 kubelet[2169]: I0209 18:35:03.328319 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ca8aead999f2b726c4673ac6063b770-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5ca8aead999f2b726c4673ac6063b770\") " pod="kube-system/kube-apiserver-localhost" Feb 9 
18:35:03.328563 kubelet[2169]: I0209 18:35:03.328549 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ca8aead999f2b726c4673ac6063b770-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5ca8aead999f2b726c4673ac6063b770\") " pod="kube-system/kube-apiserver-localhost" Feb 9 18:35:03.328703 kubelet[2169]: I0209 18:35:03.328660 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:35:03.328769 kubelet[2169]: I0209 18:35:03.328723 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:35:03.328769 kubelet[2169]: I0209 18:35:03.328758 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 9 18:35:03.328821 kubelet[2169]: I0209 18:35:03.328785 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ca8aead999f2b726c4673ac6063b770-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5ca8aead999f2b726c4673ac6063b770\") " 
pod="kube-system/kube-apiserver-localhost" Feb 9 18:35:03.328821 kubelet[2169]: I0209 18:35:03.328805 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:35:03.328891 kubelet[2169]: I0209 18:35:03.328863 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:35:03.328974 kubelet[2169]: I0209 18:35:03.328962 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 18:35:03.493083 kubelet[2169]: E0209 18:35:03.493054 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:03.493562 kubelet[2169]: E0209 18:35:03.493542 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:03.495996 kubelet[2169]: E0209 18:35:03.495966 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:03.806126 kubelet[2169]: I0209 
18:35:03.806091 2169 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 9 18:35:03.806274 kubelet[2169]: I0209 18:35:03.806179 2169 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 18:35:04.002243 kubelet[2169]: I0209 18:35:04.002204 2169 apiserver.go:52] "Watching apiserver" Feb 9 18:35:04.027663 kubelet[2169]: I0209 18:35:04.027619 2169 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 18:35:04.031785 kubelet[2169]: I0209 18:35:04.031743 2169 reconciler.go:41] "Reconciler: start to sync state" Feb 9 18:35:04.406065 kubelet[2169]: E0209 18:35:04.406031 2169 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 9 18:35:04.406544 kubelet[2169]: E0209 18:35:04.406530 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:04.606713 kubelet[2169]: E0209 18:35:04.606654 2169 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 9 18:35:04.606960 kubelet[2169]: E0209 18:35:04.606932 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:04.807450 kubelet[2169]: E0209 18:35:04.807411 2169 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 9 18:35:04.807824 kubelet[2169]: E0209 18:35:04.807809 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 
18:35:05.015961 kubelet[2169]: I0209 18:35:05.015763 2169 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.01572622 pod.CreationTimestamp="2024-02-09 18:35:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:35:05.014588489 +0000 UTC m=+2.064793936" watchObservedRunningTime="2024-02-09 18:35:05.01572622 +0000 UTC m=+2.065931667" Feb 9 18:35:05.099533 kubelet[2169]: E0209 18:35:05.099430 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:05.099995 kubelet[2169]: E0209 18:35:05.099967 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:05.100580 kubelet[2169]: E0209 18:35:05.100555 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:05.406736 kubelet[2169]: I0209 18:35:05.406630 2169 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.406597129 pod.CreationTimestamp="2024-02-09 18:35:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:35:05.406582408 +0000 UTC m=+2.456787855" watchObservedRunningTime="2024-02-09 18:35:05.406597129 +0000 UTC m=+2.456802576" Feb 9 18:35:06.100856 kubelet[2169]: E0209 18:35:06.100828 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 
18:35:06.101170 kubelet[2169]: E0209 18:35:06.100936 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:06.224160 sudo[1377]: pam_unix(sudo:session): session closed for user root Feb 9 18:35:06.223000 audit[1377]: USER_END pid=1377 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 18:35:06.223000 audit[1377]: CRED_DISP pid=1377 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 18:35:06.225561 sshd[1372]: pam_unix(sshd:session): session closed for user core Feb 9 18:35:06.226000 audit[1372]: USER_END pid=1372 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:06.226000 audit[1372]: CRED_DISP pid=1372 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:06.228616 systemd-logind[1201]: Session 7 logged out. Waiting for processes to exit. Feb 9 18:35:06.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.89:22-10.0.0.1:59020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:06.228724 systemd[1]: sshd@6-10.0.0.89:22-10.0.0.1:59020.service: Deactivated successfully. 
Feb 9 18:35:06.229545 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 18:35:06.230019 systemd-logind[1201]: Removed session 7. Feb 9 18:35:09.291148 kubelet[2169]: E0209 18:35:09.291083 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:09.306209 kubelet[2169]: I0209 18:35:09.306181 2169 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.306146505 pod.CreationTimestamp="2024-02-09 18:35:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:35:05.815567892 +0000 UTC m=+2.865773339" watchObservedRunningTime="2024-02-09 18:35:09.306146505 +0000 UTC m=+6.356351952" Feb 9 18:35:10.106441 kubelet[2169]: E0209 18:35:10.106414 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:11.145797 kubelet[2169]: E0209 18:35:11.145765 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:12.108744 kubelet[2169]: E0209 18:35:12.108692 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:13.113243 kubelet[2169]: E0209 18:35:13.113201 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:14.036528 update_engine[1203]: I0209 18:35:14.036473 1203 update_attempter.cc:509] Updating boot flags... 
Feb 9 18:35:15.250840 kubelet[2169]: E0209 18:35:15.250812 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:16.664515 kubelet[2169]: I0209 18:35:16.664482 2169 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 18:35:16.665093 env[1223]: time="2024-02-09T18:35:16.665054757Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 18:35:16.665913 kubelet[2169]: I0209 18:35:16.665888 2169 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 18:35:17.062642 kubelet[2169]: I0209 18:35:17.062587 2169 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:35:17.216094 kubelet[2169]: I0209 18:35:17.216043 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87e703ba-2c22-42d8-b5e3-ab0caf1c03ec-xtables-lock\") pod \"kube-proxy-tr7l8\" (UID: \"87e703ba-2c22-42d8-b5e3-ab0caf1c03ec\") " pod="kube-system/kube-proxy-tr7l8" Feb 9 18:35:17.216094 kubelet[2169]: I0209 18:35:17.216089 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87e703ba-2c22-42d8-b5e3-ab0caf1c03ec-lib-modules\") pod \"kube-proxy-tr7l8\" (UID: \"87e703ba-2c22-42d8-b5e3-ab0caf1c03ec\") " pod="kube-system/kube-proxy-tr7l8" Feb 9 18:35:17.216261 kubelet[2169]: I0209 18:35:17.216114 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r8l6\" (UniqueName: \"kubernetes.io/projected/87e703ba-2c22-42d8-b5e3-ab0caf1c03ec-kube-api-access-4r8l6\") pod \"kube-proxy-tr7l8\" (UID: \"87e703ba-2c22-42d8-b5e3-ab0caf1c03ec\") " pod="kube-system/kube-proxy-tr7l8" 
Feb 9 18:35:17.216261 kubelet[2169]: I0209 18:35:17.216137 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/87e703ba-2c22-42d8-b5e3-ab0caf1c03ec-kube-proxy\") pod \"kube-proxy-tr7l8\" (UID: \"87e703ba-2c22-42d8-b5e3-ab0caf1c03ec\") " pod="kube-system/kube-proxy-tr7l8" Feb 9 18:35:17.281861 kubelet[2169]: I0209 18:35:17.281814 2169 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:35:17.366615 kubelet[2169]: E0209 18:35:17.366526 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:17.367110 env[1223]: time="2024-02-09T18:35:17.367061331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tr7l8,Uid:87e703ba-2c22-42d8-b5e3-ab0caf1c03ec,Namespace:kube-system,Attempt:0,}" Feb 9 18:35:17.381198 env[1223]: time="2024-02-09T18:35:17.381146230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:35:17.381343 env[1223]: time="2024-02-09T18:35:17.381184911Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:35:17.381343 env[1223]: time="2024-02-09T18:35:17.381195031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:35:17.381343 env[1223]: time="2024-02-09T18:35:17.381318874Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/79ef0c82ca5b6ee4ca22db23149d1c66c793e6d3246708880fc17be18622bf32 pid=2302 runtime=io.containerd.runc.v2 Feb 9 18:35:17.397237 systemd[1]: run-containerd-runc-k8s.io-79ef0c82ca5b6ee4ca22db23149d1c66c793e6d3246708880fc17be18622bf32-runc.KPB0XX.mount: Deactivated successfully. Feb 9 18:35:17.418062 kubelet[2169]: I0209 18:35:17.417954 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdbr8\" (UniqueName: \"kubernetes.io/projected/7733bb18-958d-4cc5-ad38-7bb11c3bad01-kube-api-access-mdbr8\") pod \"tigera-operator-cfc98749c-f4xrr\" (UID: \"7733bb18-958d-4cc5-ad38-7bb11c3bad01\") " pod="tigera-operator/tigera-operator-cfc98749c-f4xrr" Feb 9 18:35:17.418062 kubelet[2169]: I0209 18:35:17.418000 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7733bb18-958d-4cc5-ad38-7bb11c3bad01-var-lib-calico\") pod \"tigera-operator-cfc98749c-f4xrr\" (UID: \"7733bb18-958d-4cc5-ad38-7bb11c3bad01\") " pod="tigera-operator/tigera-operator-cfc98749c-f4xrr" Feb 9 18:35:17.427415 env[1223]: time="2024-02-09T18:35:17.427337304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tr7l8,Uid:87e703ba-2c22-42d8-b5e3-ab0caf1c03ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"79ef0c82ca5b6ee4ca22db23149d1c66c793e6d3246708880fc17be18622bf32\"" Feb 9 18:35:17.427993 kubelet[2169]: E0209 18:35:17.427971 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:17.429907 env[1223]: time="2024-02-09T18:35:17.429873486Z" level=info 
msg="CreateContainer within sandbox \"79ef0c82ca5b6ee4ca22db23149d1c66c793e6d3246708880fc17be18622bf32\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 18:35:17.440618 env[1223]: time="2024-02-09T18:35:17.440570384Z" level=info msg="CreateContainer within sandbox \"79ef0c82ca5b6ee4ca22db23149d1c66c793e6d3246708880fc17be18622bf32\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f3ac9725669697a4fa9347671c5e98a97e06d2f58d61f848f61490d1be04f17b\"" Feb 9 18:35:17.441104 env[1223]: time="2024-02-09T18:35:17.441077436Z" level=info msg="StartContainer for \"f3ac9725669697a4fa9347671c5e98a97e06d2f58d61f848f61490d1be04f17b\"" Feb 9 18:35:17.517887 env[1223]: time="2024-02-09T18:35:17.517842048Z" level=info msg="StartContainer for \"f3ac9725669697a4fa9347671c5e98a97e06d2f58d61f848f61490d1be04f17b\" returns successfully" Feb 9 18:35:17.584923 env[1223]: time="2024-02-09T18:35:17.584872344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-f4xrr,Uid:7733bb18-958d-4cc5-ad38-7bb11c3bad01,Namespace:tigera-operator,Attempt:0,}" Feb 9 18:35:17.597316 env[1223]: time="2024-02-09T18:35:17.597247003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:35:17.597456 env[1223]: time="2024-02-09T18:35:17.597293364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:35:17.597456 env[1223]: time="2024-02-09T18:35:17.597303884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:35:17.597557 env[1223]: time="2024-02-09T18:35:17.597468888Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0c9034a1863228defa24f162197bb7566faaf4fce99d814c9bf40187878163c0 pid=2380 runtime=io.containerd.runc.v2 Feb 9 18:35:17.638000 audit[2425]: NETFILTER_CFG table=mangle:59 family=10 entries=1 op=nft_register_chain pid=2425 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:17.640797 kernel: kauditd_printk_skb: 9 callbacks suppressed Feb 9 18:35:17.640860 kernel: audit: type=1325 audit(1707503717.638:226): table=mangle:59 family=10 entries=1 op=nft_register_chain pid=2425 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:17.638000 audit[2425]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc28af2e0 a2=0 a3=ffff8821f6c0 items=0 ppid=2354 pid=2425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.645060 kernel: audit: type=1300 audit(1707503717.638:226): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc28af2e0 a2=0 a3=ffff8821f6c0 items=0 ppid=2354 pid=2425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.645102 kernel: audit: type=1327 audit(1707503717.638:226): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 18:35:17.638000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 18:35:17.639000 audit[2423]: NETFILTER_CFG table=mangle:60 family=2 entries=1 op=nft_register_chain pid=2423 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:17.647763 kernel: audit: type=1325 audit(1707503717.639:227): table=mangle:60 family=2 entries=1 op=nft_register_chain pid=2423 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:17.647817 kernel: audit: type=1300 audit(1707503717.639:227): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc1cf0260 a2=0 a3=ffffaff956c0 items=0 ppid=2354 pid=2423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.639000 audit[2423]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc1cf0260 a2=0 a3=ffffaff956c0 items=0 ppid=2354 pid=2423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.639000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 18:35:17.652398 kernel: audit: type=1327 audit(1707503717.639:227): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 18:35:17.650000 audit[2430]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2430 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:17.656297 kernel: audit: type=1325 audit(1707503717.650:228): table=nat:61 family=2 entries=1 op=nft_register_chain pid=2430 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:17.656367 kernel: audit: type=1300 audit(1707503717.650:228): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff694cc30 a2=0 a3=ffffb9ff86c0 items=0 ppid=2354 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.650000 audit[2430]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff694cc30 a2=0 a3=ffffb9ff86c0 items=0 ppid=2354 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.656647 kernel: audit: type=1327 audit(1707503717.650:228): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 18:35:17.650000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 18:35:17.657829 kernel: audit: type=1325 audit(1707503717.651:229): table=nat:62 family=10 entries=1 op=nft_register_chain pid=2431 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:17.651000 audit[2431]: NETFILTER_CFG table=nat:62 family=10 entries=1 op=nft_register_chain pid=2431 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:17.651000 audit[2431]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd8ab4330 a2=0 a3=ffff94db46c0 items=0 ppid=2354 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.651000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 18:35:17.651000 audit[2432]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=2432 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:17.651000 audit[2432]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffa7ffc60 a2=0 a3=ffffb945f6c0 items=0 ppid=2354 pid=2432 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.651000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 18:35:17.651000 audit[2433]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_chain pid=2433 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:17.651000 audit[2433]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdd1e1e10 a2=0 a3=ffffb09f86c0 items=0 ppid=2354 pid=2433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.651000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 18:35:17.689579 env[1223]: time="2024-02-09T18:35:17.689539629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-f4xrr,Uid:7733bb18-958d-4cc5-ad38-7bb11c3bad01,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0c9034a1863228defa24f162197bb7566faaf4fce99d814c9bf40187878163c0\"" Feb 9 18:35:17.692103 env[1223]: time="2024-02-09T18:35:17.692029249Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\"" Feb 9 18:35:17.741000 audit[2441]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=2441 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:17.741000 audit[2441]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffdd3f3ff0 a2=0 a3=ffff9c2046c0 items=0 ppid=2354 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 
18:35:17.741000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 18:35:17.743000 audit[2443]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=2443 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:17.743000 audit[2443]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffca142570 a2=0 a3=ffff8adaa6c0 items=0 ppid=2354 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.743000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 9 18:35:17.746000 audit[2446]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=2446 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:17.746000 audit[2446]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffdf907c20 a2=0 a3=ffff9dfea6c0 items=0 ppid=2354 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.746000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 9 18:35:17.747000 audit[2447]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=2447 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:17.747000 
audit[2447]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc54b3230 a2=0 a3=ffffb3d5f6c0 items=0 ppid=2354 pid=2447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.747000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 18:35:17.749000 audit[2449]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=2449 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:17.749000 audit[2449]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc40e70f0 a2=0 a3=ffff885256c0 items=0 ppid=2354 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.749000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 18:35:17.750000 audit[2450]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=2450 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:17.750000 audit[2450]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd3fc6090 a2=0 a3=ffffbe3e66c0 items=0 ppid=2354 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.750000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 18:35:17.752000 audit[2452]: 
NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=2452 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:17.752000 audit[2452]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff01bf610 a2=0 a3=ffffadf206c0 items=0 ppid=2354 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.752000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 18:35:17.755000 audit[2455]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=2455 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:17.755000 audit[2455]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff10c9030 a2=0 a3=ffffa9acb6c0 items=0 ppid=2354 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.755000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 9 18:35:17.756000 audit[2456]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=2456 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:17.756000 audit[2456]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd885f850 a2=0 a3=ffff871146c0 items=0 ppid=2354 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.756000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 18:35:17.758000 audit[2458]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=2458 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:17.758000 audit[2458]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffea8b5c80 a2=0 a3=ffff951616c0 items=0 ppid=2354 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.758000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 18:35:17.759000 audit[2459]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_chain pid=2459 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:17.759000 audit[2459]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffeb29cfe0 a2=0 a3=ffffb77066c0 items=0 ppid=2354 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.759000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 18:35:17.761000 audit[2461]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=2461 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:17.761000 audit[2461]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=748 a0=3 a1=ffffe07cb950 a2=0 a3=ffff898716c0 items=0 ppid=2354 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.761000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 18:35:17.765000 audit[2464]: NETFILTER_CFG table=filter:77 family=2 entries=1 op=nft_register_rule pid=2464 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:17.765000 audit[2464]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffcc11f40 a2=0 a3=ffff9308d6c0 items=0 ppid=2354 pid=2464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.765000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 18:35:17.768000 audit[2467]: NETFILTER_CFG table=filter:78 family=2 entries=1 op=nft_register_rule pid=2467 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:17.768000 audit[2467]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffdebccf0 a2=0 a3=ffff99e186c0 items=0 ppid=2354 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.768000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 18:35:17.769000 audit[2468]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_chain pid=2468 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:17.769000 audit[2468]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd1226cc0 a2=0 a3=ffff9e1886c0 items=0 ppid=2354 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.769000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 18:35:17.771000 audit[2470]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_rule pid=2470 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:17.771000 audit[2470]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffc75d0500 a2=0 a3=ffffbd6506c0 items=0 ppid=2354 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.771000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 18:35:17.774000 audit[2473]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=2473 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:17.774000 audit[2473]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffca469960 a2=0 a3=ffff9f90b6c0 
items=0 ppid=2354 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.774000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 18:35:17.787000 audit[2477]: NETFILTER_CFG table=filter:82 family=2 entries=6 op=nft_register_rule pid=2477 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:17.787000 audit[2477]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffd916ab80 a2=0 a3=ffffba74e6c0 items=0 ppid=2354 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.787000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:17.792000 audit[2477]: NETFILTER_CFG table=nat:83 family=2 entries=17 op=nft_register_chain pid=2477 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:17.792000 audit[2477]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffd916ab80 a2=0 a3=ffffba74e6c0 items=0 ppid=2354 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.792000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:17.793000 audit[2482]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2482 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:17.793000 audit[2482]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc3a54ff0 a2=0 a3=ffff9dc476c0 items=0 ppid=2354 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.793000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 18:35:17.795000 audit[2484]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=2484 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:17.795000 audit[2484]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc1ace280 a2=0 a3=ffffaf6286c0 items=0 ppid=2354 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.795000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 9 18:35:17.799000 audit[2487]: NETFILTER_CFG table=filter:86 family=10 entries=2 op=nft_register_chain pid=2487 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:17.799000 audit[2487]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffe93e3460 a2=0 a3=ffffb00536c0 items=0 ppid=2354 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.799000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Feb 9 18:35:17.799000 audit[2488]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=2488 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:17.799000 audit[2488]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc1134440 a2=0 a3=ffff8c61e6c0 items=0 ppid=2354 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.799000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 18:35:17.801000 audit[2490]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=2490 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:17.801000 audit[2490]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc604f110 a2=0 a3=ffff967336c0 items=0 ppid=2354 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.801000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 18:35:17.802000 audit[2491]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=2491 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:17.802000 audit[2491]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=100 a0=3 a1=ffffc78d7e90 a2=0 a3=ffff826a86c0 items=0 ppid=2354 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.802000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 18:35:17.804000 audit[2493]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=2493 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:17.804000 audit[2493]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd0148900 a2=0 a3=ffffa1da56c0 items=0 ppid=2354 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.804000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Feb 9 18:35:17.807000 audit[2496]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=2496 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:17.807000 audit[2496]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffdb54b300 a2=0 a3=ffff97d2c6c0 items=0 ppid=2354 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.807000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 18:35:17.808000 audit[2497]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=2497 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:17.808000 audit[2497]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff3dab210 a2=0 a3=ffff8d6e96c0 items=0 ppid=2354 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.808000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 18:35:17.810000 audit[2499]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=2499 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:17.810000 audit[2499]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffeea3d0d0 a2=0 a3=ffff8d4326c0 items=0 ppid=2354 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.810000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 18:35:17.811000 audit[2500]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=2500 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:17.811000 audit[2500]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 
a1=fffff273b470 a2=0 a3=ffffaf73e6c0 items=0 ppid=2354 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.811000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 18:35:17.813000 audit[2502]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=2502 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:17.813000 audit[2502]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe22e3dc0 a2=0 a3=ffff9193b6c0 items=0 ppid=2354 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.813000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 18:35:17.816000 audit[2505]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=2505 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:17.816000 audit[2505]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc9503f20 a2=0 a3=ffffb52126c0 items=0 ppid=2354 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.816000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 18:35:17.819000 audit[2508]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=2508 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:17.819000 audit[2508]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc1788560 a2=0 a3=ffffa54406c0 items=0 ppid=2354 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.819000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Feb 9 18:35:17.820000 audit[2509]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=2509 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:17.820000 audit[2509]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffeafb66c0 a2=0 a3=ffff987856c0 items=0 ppid=2354 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.820000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 18:35:17.822000 audit[2511]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=2511 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:17.822000 audit[2511]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=600 a0=3 a1=ffffefd13d80 a2=0 a3=ffff81d356c0 items=0 ppid=2354 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.822000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 18:35:17.825000 audit[2514]: NETFILTER_CFG table=nat:100 family=10 entries=2 op=nft_register_chain pid=2514 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:17.825000 audit[2514]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffd4182020 a2=0 a3=ffffa46966c0 items=0 ppid=2354 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.825000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 18:35:17.829000 audit[2518]: NETFILTER_CFG table=filter:101 family=10 entries=3 op=nft_register_rule pid=2518 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 18:35:17.829000 audit[2518]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffcb7edaa0 a2=0 a3=ffffa44d06c0 items=0 ppid=2354 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.829000 audit: PROCTITLE 
proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:17.830000 audit[2518]: NETFILTER_CFG table=nat:102 family=10 entries=10 op=nft_register_chain pid=2518 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 18:35:17.830000 audit[2518]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1968 a0=3 a1=ffffcb7edaa0 a2=0 a3=ffffa44d06c0 items=0 ppid=2354 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:17.830000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:18.123705 kubelet[2169]: E0209 18:35:18.122737 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:18.131962 kubelet[2169]: I0209 18:35:18.131762 2169 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-tr7l8" podStartSLOduration=1.13172199 pod.CreationTimestamp="2024-02-09 18:35:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:35:18.131555706 +0000 UTC m=+15.181761153" watchObservedRunningTime="2024-02-09 18:35:18.13172199 +0000 UTC m=+15.181927397" Feb 9 18:35:18.494002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1537846841.mount: Deactivated successfully. 
Feb 9 18:35:19.124377 kubelet[2169]: E0209 18:35:19.124177 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:19.199367 env[1223]: time="2024-02-09T18:35:19.199316024Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:19.201742 env[1223]: time="2024-02-09T18:35:19.201705316Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c7a10ec867a90652f951a6ba5a12efb94165e0a1c9b72167810d1065e57d768f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:19.203507 env[1223]: time="2024-02-09T18:35:19.203483915Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:19.204220 env[1223]: time="2024-02-09T18:35:19.204183611Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\" returns image reference \"sha256:c7a10ec867a90652f951a6ba5a12efb94165e0a1c9b72167810d1065e57d768f\"" Feb 9 18:35:19.206053 env[1223]: time="2024-02-09T18:35:19.204912947Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:715ac9a30f8a9579e44258af20de354715429e11836b493918e9e1a696e9b028,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:19.206716 env[1223]: time="2024-02-09T18:35:19.206497941Z" level=info msg="CreateContainer within sandbox \"0c9034a1863228defa24f162197bb7566faaf4fce99d814c9bf40187878163c0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 9 18:35:19.216051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2736122484.mount: Deactivated successfully. 
Feb 9 18:35:19.216609 env[1223]: time="2024-02-09T18:35:19.216495761Z" level=info msg="CreateContainer within sandbox \"0c9034a1863228defa24f162197bb7566faaf4fce99d814c9bf40187878163c0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"fe31302a10055bebd4c63a75b856a12c8c651ea9500435a0494eefaf19b83c57\"" Feb 9 18:35:19.216946 env[1223]: time="2024-02-09T18:35:19.216920810Z" level=info msg="StartContainer for \"fe31302a10055bebd4c63a75b856a12c8c651ea9500435a0494eefaf19b83c57\"" Feb 9 18:35:19.285019 env[1223]: time="2024-02-09T18:35:19.284929704Z" level=info msg="StartContainer for \"fe31302a10055bebd4c63a75b856a12c8c651ea9500435a0494eefaf19b83c57\" returns successfully" Feb 9 18:35:20.133182 kubelet[2169]: I0209 18:35:20.133150 2169 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-cfc98749c-f4xrr" podStartSLOduration=-9.223372033721668e+09 pod.CreationTimestamp="2024-02-09 18:35:17 +0000 UTC" firstStartedPulling="2024-02-09 18:35:17.691572238 +0000 UTC m=+14.741777685" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:35:20.132901875 +0000 UTC m=+17.183107282" watchObservedRunningTime="2024-02-09 18:35:20.13310832 +0000 UTC m=+17.183313767" Feb 9 18:35:22.600000 audit[2585]: NETFILTER_CFG table=filter:103 family=2 entries=13 op=nft_register_rule pid=2585 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:22.600000 audit[2585]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffd4fb0590 a2=0 a3=ffff9cd196c0 items=0 ppid=2354 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:22.600000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:22.601000 
audit[2585]: NETFILTER_CFG table=nat:104 family=2 entries=20 op=nft_register_rule pid=2585 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:22.601000 audit[2585]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffd4fb0590 a2=0 a3=ffff9cd196c0 items=0 ppid=2354 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:22.601000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:22.636000 audit[2611]: NETFILTER_CFG table=filter:105 family=2 entries=14 op=nft_register_rule pid=2611 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:22.636000 audit[2611]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffe67ec670 a2=0 a3=ffffa8e266c0 items=0 ppid=2354 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:22.636000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:22.649389 kernel: kauditd_printk_skb: 131 callbacks suppressed Feb 9 18:35:22.649474 kernel: audit: type=1325 audit(1707503722.637:273): table=nat:106 family=2 entries=20 op=nft_register_rule pid=2611 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:22.637000 audit[2611]: NETFILTER_CFG table=nat:106 family=2 entries=20 op=nft_register_rule pid=2611 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:22.637000 audit[2611]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffe67ec670 a2=0 a3=ffffa8e266c0 items=0 ppid=2354 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:22.652399 kernel: audit: type=1300 audit(1707503722.637:273): arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffe67ec670 a2=0 a3=ffffa8e266c0 items=0 ppid=2354 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:22.652457 kernel: audit: type=1327 audit(1707503722.637:273): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:22.637000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:22.695946 kubelet[2169]: I0209 18:35:22.695895 2169 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:35:22.742408 kubelet[2169]: I0209 18:35:22.742233 2169 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:35:22.851331 kubelet[2169]: I0209 18:35:22.851247 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/27a77f3e-a849-4edc-be13-95759836676d-var-run-calico\") pod \"calico-node-22rnx\" (UID: \"27a77f3e-a849-4edc-be13-95759836676d\") " pod="calico-system/calico-node-22rnx" Feb 9 18:35:22.851521 kubelet[2169]: I0209 18:35:22.851508 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/27a77f3e-a849-4edc-be13-95759836676d-flexvol-driver-host\") pod \"calico-node-22rnx\" (UID: \"27a77f3e-a849-4edc-be13-95759836676d\") " pod="calico-system/calico-node-22rnx" Feb 9 18:35:22.851647 kubelet[2169]: I0209 18:35:22.851631 2169 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6k92\" (UniqueName: \"kubernetes.io/projected/27a77f3e-a849-4edc-be13-95759836676d-kube-api-access-d6k92\") pod \"calico-node-22rnx\" (UID: \"27a77f3e-a849-4edc-be13-95759836676d\") " pod="calico-system/calico-node-22rnx" Feb 9 18:35:22.851817 kubelet[2169]: I0209 18:35:22.851775 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/27a77f3e-a849-4edc-be13-95759836676d-cni-net-dir\") pod \"calico-node-22rnx\" (UID: \"27a77f3e-a849-4edc-be13-95759836676d\") " pod="calico-system/calico-node-22rnx" Feb 9 18:35:22.851932 kubelet[2169]: I0209 18:35:22.851921 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/27a77f3e-a849-4edc-be13-95759836676d-cni-log-dir\") pod \"calico-node-22rnx\" (UID: \"27a77f3e-a849-4edc-be13-95759836676d\") " pod="calico-system/calico-node-22rnx" Feb 9 18:35:22.852105 kubelet[2169]: I0209 18:35:22.852047 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pg6p\" (UniqueName: \"kubernetes.io/projected/9a010e29-dc62-4fa4-ab1d-be7ab556ce60-kube-api-access-5pg6p\") pod \"calico-typha-559456cc49-sz6bl\" (UID: \"9a010e29-dc62-4fa4-ab1d-be7ab556ce60\") " pod="calico-system/calico-typha-559456cc49-sz6bl" Feb 9 18:35:22.852259 kubelet[2169]: I0209 18:35:22.852228 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a010e29-dc62-4fa4-ab1d-be7ab556ce60-tigera-ca-bundle\") pod \"calico-typha-559456cc49-sz6bl\" (UID: \"9a010e29-dc62-4fa4-ab1d-be7ab556ce60\") " pod="calico-system/calico-typha-559456cc49-sz6bl" Feb 9 18:35:22.852380 kubelet[2169]: I0209 18:35:22.852369 2169 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/27a77f3e-a849-4edc-be13-95759836676d-cni-bin-dir\") pod \"calico-node-22rnx\" (UID: \"27a77f3e-a849-4edc-be13-95759836676d\") " pod="calico-system/calico-node-22rnx" Feb 9 18:35:22.852506 kubelet[2169]: I0209 18:35:22.852496 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27a77f3e-a849-4edc-be13-95759836676d-lib-modules\") pod \"calico-node-22rnx\" (UID: \"27a77f3e-a849-4edc-be13-95759836676d\") " pod="calico-system/calico-node-22rnx" Feb 9 18:35:22.852631 kubelet[2169]: I0209 18:35:22.852621 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/27a77f3e-a849-4edc-be13-95759836676d-node-certs\") pod \"calico-node-22rnx\" (UID: \"27a77f3e-a849-4edc-be13-95759836676d\") " pod="calico-system/calico-node-22rnx" Feb 9 18:35:22.852767 kubelet[2169]: I0209 18:35:22.852750 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27a77f3e-a849-4edc-be13-95759836676d-xtables-lock\") pod \"calico-node-22rnx\" (UID: \"27a77f3e-a849-4edc-be13-95759836676d\") " pod="calico-system/calico-node-22rnx" Feb 9 18:35:22.852908 kubelet[2169]: I0209 18:35:22.852887 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/27a77f3e-a849-4edc-be13-95759836676d-var-lib-calico\") pod \"calico-node-22rnx\" (UID: \"27a77f3e-a849-4edc-be13-95759836676d\") " pod="calico-system/calico-node-22rnx" Feb 9 18:35:22.853027 kubelet[2169]: I0209 18:35:22.853016 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" 
(UniqueName: \"kubernetes.io/secret/9a010e29-dc62-4fa4-ab1d-be7ab556ce60-typha-certs\") pod \"calico-typha-559456cc49-sz6bl\" (UID: \"9a010e29-dc62-4fa4-ab1d-be7ab556ce60\") " pod="calico-system/calico-typha-559456cc49-sz6bl" Feb 9 18:35:22.853198 kubelet[2169]: I0209 18:35:22.853185 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27a77f3e-a849-4edc-be13-95759836676d-tigera-ca-bundle\") pod \"calico-node-22rnx\" (UID: \"27a77f3e-a849-4edc-be13-95759836676d\") " pod="calico-system/calico-node-22rnx" Feb 9 18:35:22.853323 kubelet[2169]: I0209 18:35:22.853312 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/27a77f3e-a849-4edc-be13-95759836676d-policysync\") pod \"calico-node-22rnx\" (UID: \"27a77f3e-a849-4edc-be13-95759836676d\") " pod="calico-system/calico-node-22rnx" Feb 9 18:35:22.854797 kubelet[2169]: I0209 18:35:22.854442 2169 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:35:22.854797 kubelet[2169]: E0209 18:35:22.854749 2169 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbbzb" podUID=85c81bc7-ead8-41f3-b0f6-13db63c2997b Feb 9 18:35:22.953787 kubelet[2169]: I0209 18:35:22.953746 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/85c81bc7-ead8-41f3-b0f6-13db63c2997b-registration-dir\") pod \"csi-node-driver-tbbzb\" (UID: \"85c81bc7-ead8-41f3-b0f6-13db63c2997b\") " pod="calico-system/csi-node-driver-tbbzb" Feb 9 18:35:22.954385 kubelet[2169]: E0209 18:35:22.954362 2169 driver-call.go:262] Failed to unmarshal output for 
command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.954385 kubelet[2169]: W0209 18:35:22.954381 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.954500 kubelet[2169]: E0209 18:35:22.954413 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.954585 kubelet[2169]: E0209 18:35:22.954571 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.954585 kubelet[2169]: W0209 18:35:22.954582 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.954673 kubelet[2169]: E0209 18:35:22.954597 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.954780 kubelet[2169]: E0209 18:35:22.954765 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.954780 kubelet[2169]: W0209 18:35:22.954776 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.954851 kubelet[2169]: E0209 18:35:22.954787 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:22.954980 kubelet[2169]: E0209 18:35:22.954965 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.954980 kubelet[2169]: W0209 18:35:22.954976 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.955057 kubelet[2169]: E0209 18:35:22.954992 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.955191 kubelet[2169]: E0209 18:35:22.955172 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.955191 kubelet[2169]: W0209 18:35:22.955189 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.955280 kubelet[2169]: E0209 18:35:22.955199 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:22.955382 kubelet[2169]: E0209 18:35:22.955351 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.955382 kubelet[2169]: W0209 18:35:22.955373 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.955382 kubelet[2169]: E0209 18:35:22.955384 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.955540 kubelet[2169]: E0209 18:35:22.955525 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.955540 kubelet[2169]: W0209 18:35:22.955540 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.955618 kubelet[2169]: E0209 18:35:22.955555 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:22.955618 kubelet[2169]: I0209 18:35:22.955575 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/85c81bc7-ead8-41f3-b0f6-13db63c2997b-varrun\") pod \"csi-node-driver-tbbzb\" (UID: \"85c81bc7-ead8-41f3-b0f6-13db63c2997b\") " pod="calico-system/csi-node-driver-tbbzb" Feb 9 18:35:22.955719 kubelet[2169]: E0209 18:35:22.955704 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.955719 kubelet[2169]: W0209 18:35:22.955719 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.955791 kubelet[2169]: E0209 18:35:22.955737 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:22.955791 kubelet[2169]: I0209 18:35:22.955755 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/85c81bc7-ead8-41f3-b0f6-13db63c2997b-socket-dir\") pod \"csi-node-driver-tbbzb\" (UID: \"85c81bc7-ead8-41f3-b0f6-13db63c2997b\") " pod="calico-system/csi-node-driver-tbbzb" Feb 9 18:35:22.955907 kubelet[2169]: E0209 18:35:22.955896 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.955951 kubelet[2169]: W0209 18:35:22.955908 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.955951 kubelet[2169]: E0209 18:35:22.955923 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.956088 kubelet[2169]: E0209 18:35:22.956058 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.956088 kubelet[2169]: W0209 18:35:22.956075 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.956088 kubelet[2169]: E0209 18:35:22.956085 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:22.956271 kubelet[2169]: E0209 18:35:22.956260 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.956271 kubelet[2169]: W0209 18:35:22.956270 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.956346 kubelet[2169]: E0209 18:35:22.956284 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.956480 kubelet[2169]: E0209 18:35:22.956466 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.956480 kubelet[2169]: W0209 18:35:22.956478 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.956557 kubelet[2169]: E0209 18:35:22.956488 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:22.956797 kubelet[2169]: E0209 18:35:22.956782 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.956887 kubelet[2169]: W0209 18:35:22.956874 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.956970 kubelet[2169]: E0209 18:35:22.956960 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.957222 kubelet[2169]: E0209 18:35:22.957210 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.957309 kubelet[2169]: W0209 18:35:22.957297 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.957468 kubelet[2169]: E0209 18:35:22.957456 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:22.957620 kubelet[2169]: E0209 18:35:22.957591 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.957701 kubelet[2169]: W0209 18:35:22.957688 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.957781 kubelet[2169]: E0209 18:35:22.957772 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.958014 kubelet[2169]: E0209 18:35:22.958002 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.958106 kubelet[2169]: W0209 18:35:22.958093 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.958182 kubelet[2169]: E0209 18:35:22.958172 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:22.958500 kubelet[2169]: E0209 18:35:22.958488 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.958613 kubelet[2169]: W0209 18:35:22.958600 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.958698 kubelet[2169]: E0209 18:35:22.958688 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.958906 kubelet[2169]: E0209 18:35:22.958891 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.958961 kubelet[2169]: W0209 18:35:22.958906 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.958961 kubelet[2169]: E0209 18:35:22.958920 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:22.959457 kubelet[2169]: E0209 18:35:22.959442 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.959457 kubelet[2169]: W0209 18:35:22.959457 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.959557 kubelet[2169]: E0209 18:35:22.959479 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.959703 kubelet[2169]: E0209 18:35:22.959690 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.959703 kubelet[2169]: W0209 18:35:22.959702 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.959785 kubelet[2169]: E0209 18:35:22.959718 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:22.959886 kubelet[2169]: E0209 18:35:22.959869 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.959886 kubelet[2169]: W0209 18:35:22.959882 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.959947 kubelet[2169]: E0209 18:35:22.959892 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.960030 kubelet[2169]: E0209 18:35:22.960021 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.960030 kubelet[2169]: W0209 18:35:22.960030 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.960085 kubelet[2169]: E0209 18:35:22.960046 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:22.960263 kubelet[2169]: E0209 18:35:22.960202 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.960263 kubelet[2169]: W0209 18:35:22.960212 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.960402 kubelet[2169]: E0209 18:35:22.960347 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.960402 kubelet[2169]: W0209 18:35:22.960378 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.960402 kubelet[2169]: E0209 18:35:22.960390 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.960525 kubelet[2169]: E0209 18:35:22.960501 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:22.961150 kubelet[2169]: E0209 18:35:22.960522 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.961234 kubelet[2169]: W0209 18:35:22.961220 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.961301 kubelet[2169]: E0209 18:35:22.961290 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.961609 kubelet[2169]: E0209 18:35:22.961595 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.961684 kubelet[2169]: W0209 18:35:22.961672 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.961761 kubelet[2169]: E0209 18:35:22.961751 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:22.961851 kubelet[2169]: I0209 18:35:22.961831 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/85c81bc7-ead8-41f3-b0f6-13db63c2997b-kubelet-dir\") pod \"csi-node-driver-tbbzb\" (UID: \"85c81bc7-ead8-41f3-b0f6-13db63c2997b\") " pod="calico-system/csi-node-driver-tbbzb" Feb 9 18:35:22.962100 kubelet[2169]: E0209 18:35:22.962087 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.962184 kubelet[2169]: W0209 18:35:22.962169 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.962242 kubelet[2169]: E0209 18:35:22.962233 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.962468 kubelet[2169]: E0209 18:35:22.962454 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.962598 kubelet[2169]: W0209 18:35:22.962543 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.962717 kubelet[2169]: E0209 18:35:22.962695 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:22.962896 kubelet[2169]: E0209 18:35:22.962884 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.962967 kubelet[2169]: W0209 18:35:22.962954 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.963065 kubelet[2169]: E0209 18:35:22.963047 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.963219 kubelet[2169]: E0209 18:35:22.963207 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.963286 kubelet[2169]: W0209 18:35:22.963273 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.963348 kubelet[2169]: E0209 18:35:22.963340 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:22.963596 kubelet[2169]: E0209 18:35:22.963584 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.963717 kubelet[2169]: W0209 18:35:22.963704 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.963784 kubelet[2169]: E0209 18:35:22.963774 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.963999 kubelet[2169]: E0209 18:35:22.963988 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.964073 kubelet[2169]: W0209 18:35:22.964062 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.964146 kubelet[2169]: E0209 18:35:22.964135 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:22.964389 kubelet[2169]: E0209 18:35:22.964373 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.964389 kubelet[2169]: W0209 18:35:22.964388 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.964493 kubelet[2169]: E0209 18:35:22.964428 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.964600 kubelet[2169]: E0209 18:35:22.964582 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.964600 kubelet[2169]: W0209 18:35:22.964594 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.964663 kubelet[2169]: E0209 18:35:22.964604 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:22.964752 kubelet[2169]: E0209 18:35:22.964742 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.964752 kubelet[2169]: W0209 18:35:22.964752 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.964832 kubelet[2169]: E0209 18:35:22.964763 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.964916 kubelet[2169]: E0209 18:35:22.964903 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.964916 kubelet[2169]: W0209 18:35:22.964914 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.964991 kubelet[2169]: E0209 18:35:22.964930 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:22.965187 kubelet[2169]: E0209 18:35:22.965175 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.965225 kubelet[2169]: W0209 18:35:22.965188 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.965225 kubelet[2169]: E0209 18:35:22.965205 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.966570 kubelet[2169]: E0209 18:35:22.965563 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.966570 kubelet[2169]: W0209 18:35:22.965578 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.966570 kubelet[2169]: E0209 18:35:22.965760 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.966570 kubelet[2169]: W0209 18:35:22.965769 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.966570 kubelet[2169]: E0209 18:35:22.965915 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.966570 kubelet[2169]: W0209 18:35:22.965923 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], 
error: executable file not found in $PATH, output: "" Feb 9 18:35:22.966570 kubelet[2169]: E0209 18:35:22.965934 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.966570 kubelet[2169]: E0209 18:35:22.966071 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.966570 kubelet[2169]: W0209 18:35:22.966078 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.966570 kubelet[2169]: E0209 18:35:22.966088 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.966570 kubelet[2169]: E0209 18:35:22.966236 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.966992 kubelet[2169]: W0209 18:35:22.966243 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.966992 kubelet[2169]: E0209 18:35:22.966270 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:22.966992 kubelet[2169]: E0209 18:35:22.966444 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.966992 kubelet[2169]: W0209 18:35:22.966452 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.966992 kubelet[2169]: E0209 18:35:22.966462 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.967161 kubelet[2169]: E0209 18:35:22.967143 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.967681 kubelet[2169]: E0209 18:35:22.967162 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.967799 kubelet[2169]: W0209 18:35:22.967782 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.967882 kubelet[2169]: E0209 18:35:22.967872 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.968037 kubelet[2169]: E0209 18:35:22.967175 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:22.968177 kubelet[2169]: E0209 18:35:22.968144 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.968286 kubelet[2169]: W0209 18:35:22.968233 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.968376 kubelet[2169]: E0209 18:35:22.968351 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.968641 kubelet[2169]: E0209 18:35:22.968629 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.968708 kubelet[2169]: W0209 18:35:22.968696 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.968818 kubelet[2169]: E0209 18:35:22.968808 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:22.969079 kubelet[2169]: E0209 18:35:22.969066 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.969171 kubelet[2169]: W0209 18:35:22.969138 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.969241 kubelet[2169]: E0209 18:35:22.969232 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.969517 kubelet[2169]: E0209 18:35:22.969502 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.969604 kubelet[2169]: W0209 18:35:22.969591 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.969669 kubelet[2169]: E0209 18:35:22.969660 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:22.969827 kubelet[2169]: I0209 18:35:22.969808 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs7vf\" (UniqueName: \"kubernetes.io/projected/85c81bc7-ead8-41f3-b0f6-13db63c2997b-kube-api-access-fs7vf\") pod \"csi-node-driver-tbbzb\" (UID: \"85c81bc7-ead8-41f3-b0f6-13db63c2997b\") " pod="calico-system/csi-node-driver-tbbzb" Feb 9 18:35:22.969975 kubelet[2169]: E0209 18:35:22.969955 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.970036 kubelet[2169]: W0209 18:35:22.970024 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.970107 kubelet[2169]: E0209 18:35:22.970097 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.970797 kubelet[2169]: E0209 18:35:22.970779 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.970915 kubelet[2169]: W0209 18:35:22.970899 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.970982 kubelet[2169]: E0209 18:35:22.970970 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:22.974268 kubelet[2169]: E0209 18:35:22.974250 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.974490 kubelet[2169]: W0209 18:35:22.974471 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.974594 kubelet[2169]: E0209 18:35:22.974583 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.975141 kubelet[2169]: E0209 18:35:22.975124 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.975247 kubelet[2169]: W0209 18:35:22.975232 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.975314 kubelet[2169]: E0209 18:35:22.975303 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:22.975610 kubelet[2169]: E0209 18:35:22.975595 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.975687 kubelet[2169]: W0209 18:35:22.975675 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.975762 kubelet[2169]: E0209 18:35:22.975752 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.975987 kubelet[2169]: E0209 18:35:22.975975 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.976070 kubelet[2169]: W0209 18:35:22.976056 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.976136 kubelet[2169]: E0209 18:35:22.976127 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:22.976331 kubelet[2169]: E0209 18:35:22.976318 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.976441 kubelet[2169]: W0209 18:35:22.976426 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.976510 kubelet[2169]: E0209 18:35:22.976500 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.976716 kubelet[2169]: E0209 18:35:22.976696 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.976799 kubelet[2169]: W0209 18:35:22.976787 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.976870 kubelet[2169]: E0209 18:35:22.976860 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:22.977079 kubelet[2169]: E0209 18:35:22.977064 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.977157 kubelet[2169]: W0209 18:35:22.977145 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.977216 kubelet[2169]: E0209 18:35:22.977208 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.977434 kubelet[2169]: E0209 18:35:22.977421 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.977518 kubelet[2169]: W0209 18:35:22.977505 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.977583 kubelet[2169]: E0209 18:35:22.977574 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:22.977788 kubelet[2169]: E0209 18:35:22.977774 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.977867 kubelet[2169]: W0209 18:35:22.977855 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.980459 kubelet[2169]: E0209 18:35:22.980437 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.980719 kubelet[2169]: E0209 18:35:22.980704 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.980719 kubelet[2169]: W0209 18:35:22.980718 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.980853 kubelet[2169]: E0209 18:35:22.980830 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:22.980920 kubelet[2169]: E0209 18:35:22.980880 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.980978 kubelet[2169]: W0209 18:35:22.980967 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.981119 kubelet[2169]: E0209 18:35:22.981108 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.981252 kubelet[2169]: E0209 18:35:22.981242 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.981324 kubelet[2169]: W0209 18:35:22.981312 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.981497 kubelet[2169]: E0209 18:35:22.981474 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:22.982011 kubelet[2169]: E0209 18:35:22.981994 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.982094 kubelet[2169]: W0209 18:35:22.982081 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.982215 kubelet[2169]: E0209 18:35:22.982195 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.982330 kubelet[2169]: E0209 18:35:22.982319 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.982443 kubelet[2169]: W0209 18:35:22.982430 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.982592 kubelet[2169]: E0209 18:35:22.982572 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:22.982833 kubelet[2169]: E0209 18:35:22.982820 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.982927 kubelet[2169]: W0209 18:35:22.982914 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.983103 kubelet[2169]: E0209 18:35:22.983091 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.983249 kubelet[2169]: E0209 18:35:22.983239 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.983411 kubelet[2169]: W0209 18:35:22.983397 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.983531 kubelet[2169]: E0209 18:35:22.983508 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:22.983731 kubelet[2169]: E0209 18:35:22.983718 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.983796 kubelet[2169]: W0209 18:35:22.983784 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.983884 kubelet[2169]: E0209 18:35:22.983872 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.985201 kubelet[2169]: E0209 18:35:22.985185 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.985302 kubelet[2169]: W0209 18:35:22.985289 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.986376 kubelet[2169]: E0209 18:35:22.985381 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:22.986503 kubelet[2169]: E0209 18:35:22.986218 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.986580 kubelet[2169]: W0209 18:35:22.986565 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.986664 kubelet[2169]: E0209 18:35:22.986655 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:22.994793 kubelet[2169]: E0209 18:35:22.994768 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:22.994918 kubelet[2169]: W0209 18:35:22.994903 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:22.995030 kubelet[2169]: E0209 18:35:22.994991 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:23.083031 kubelet[2169]: E0209 18:35:23.083003 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.083031 kubelet[2169]: W0209 18:35:23.083025 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.083212 kubelet[2169]: E0209 18:35:23.083049 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:23.083284 kubelet[2169]: E0209 18:35:23.083268 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.083284 kubelet[2169]: W0209 18:35:23.083283 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.083364 kubelet[2169]: E0209 18:35:23.083301 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:23.083530 kubelet[2169]: E0209 18:35:23.083516 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.083530 kubelet[2169]: W0209 18:35:23.083530 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.083602 kubelet[2169]: E0209 18:35:23.083547 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:23.083754 kubelet[2169]: E0209 18:35:23.083741 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.083754 kubelet[2169]: W0209 18:35:23.083753 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.083829 kubelet[2169]: E0209 18:35:23.083769 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:23.084280 kubelet[2169]: E0209 18:35:23.084267 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.084313 kubelet[2169]: W0209 18:35:23.084280 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.084313 kubelet[2169]: E0209 18:35:23.084299 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:23.084520 kubelet[2169]: E0209 18:35:23.084506 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.084520 kubelet[2169]: W0209 18:35:23.084518 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.084609 kubelet[2169]: E0209 18:35:23.084589 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:23.084687 kubelet[2169]: E0209 18:35:23.084674 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.084687 kubelet[2169]: W0209 18:35:23.084687 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.084739 kubelet[2169]: E0209 18:35:23.084713 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:23.086345 kubelet[2169]: E0209 18:35:23.085421 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.086345 kubelet[2169]: W0209 18:35:23.085438 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.086345 kubelet[2169]: E0209 18:35:23.085475 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:23.086345 kubelet[2169]: E0209 18:35:23.085603 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.086345 kubelet[2169]: W0209 18:35:23.085610 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.086345 kubelet[2169]: E0209 18:35:23.085637 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:23.086345 kubelet[2169]: E0209 18:35:23.085754 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.086345 kubelet[2169]: W0209 18:35:23.085760 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.086345 kubelet[2169]: E0209 18:35:23.085783 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:23.086345 kubelet[2169]: E0209 18:35:23.085899 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.086638 kubelet[2169]: W0209 18:35:23.085907 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.086638 kubelet[2169]: E0209 18:35:23.085931 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:23.086638 kubelet[2169]: E0209 18:35:23.086052 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.086638 kubelet[2169]: W0209 18:35:23.086060 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.086638 kubelet[2169]: E0209 18:35:23.086094 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:23.086638 kubelet[2169]: E0209 18:35:23.086184 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.086638 kubelet[2169]: W0209 18:35:23.086190 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.086638 kubelet[2169]: E0209 18:35:23.086203 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:23.086638 kubelet[2169]: E0209 18:35:23.086344 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.086638 kubelet[2169]: W0209 18:35:23.086363 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.086868 kubelet[2169]: E0209 18:35:23.086374 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:23.086868 kubelet[2169]: E0209 18:35:23.086665 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.086868 kubelet[2169]: W0209 18:35:23.086676 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.086868 kubelet[2169]: E0209 18:35:23.086692 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:23.086956 kubelet[2169]: E0209 18:35:23.086876 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.086956 kubelet[2169]: W0209 18:35:23.086885 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.086956 kubelet[2169]: E0209 18:35:23.086898 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:23.091153 kubelet[2169]: E0209 18:35:23.091122 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.091153 kubelet[2169]: W0209 18:35:23.091140 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.091246 kubelet[2169]: E0209 18:35:23.091225 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:23.091560 kubelet[2169]: E0209 18:35:23.091540 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.091560 kubelet[2169]: W0209 18:35:23.091555 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.091647 kubelet[2169]: E0209 18:35:23.091602 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:23.091785 kubelet[2169]: E0209 18:35:23.091766 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.091819 kubelet[2169]: W0209 18:35:23.091786 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.091855 kubelet[2169]: E0209 18:35:23.091835 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:23.096182 kubelet[2169]: E0209 18:35:23.096166 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.096182 kubelet[2169]: W0209 18:35:23.096180 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.096382 kubelet[2169]: E0209 18:35:23.096351 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:23.096561 kubelet[2169]: E0209 18:35:23.096548 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.096597 kubelet[2169]: W0209 18:35:23.096562 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.096707 kubelet[2169]: E0209 18:35:23.096693 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:23.098780 kubelet[2169]: E0209 18:35:23.097577 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.098780 kubelet[2169]: W0209 18:35:23.097669 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.098780 kubelet[2169]: E0209 18:35:23.098127 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:23.098780 kubelet[2169]: E0209 18:35:23.098586 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.098780 kubelet[2169]: W0209 18:35:23.098596 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.098780 kubelet[2169]: E0209 18:35:23.098686 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:23.098979 kubelet[2169]: E0209 18:35:23.098883 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.098979 kubelet[2169]: W0209 18:35:23.098891 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.098979 kubelet[2169]: E0209 18:35:23.098971 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:23.099095 kubelet[2169]: E0209 18:35:23.099078 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.099095 kubelet[2169]: W0209 18:35:23.099091 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.099158 kubelet[2169]: E0209 18:35:23.099108 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:23.099457 kubelet[2169]: E0209 18:35:23.099276 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.099457 kubelet[2169]: W0209 18:35:23.099293 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.099457 kubelet[2169]: E0209 18:35:23.099306 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:23.099580 kubelet[2169]: E0209 18:35:23.099511 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.099580 kubelet[2169]: W0209 18:35:23.099521 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.099580 kubelet[2169]: E0209 18:35:23.099534 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:23.112332 kubelet[2169]: E0209 18:35:23.112252 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.112332 kubelet[2169]: W0209 18:35:23.112273 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.112332 kubelet[2169]: E0209 18:35:23.112293 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:23.189468 kubelet[2169]: E0209 18:35:23.189433 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.189468 kubelet[2169]: W0209 18:35:23.189455 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.189468 kubelet[2169]: E0209 18:35:23.189477 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:23.189720 kubelet[2169]: E0209 18:35:23.189700 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.189720 kubelet[2169]: W0209 18:35:23.189709 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.189720 kubelet[2169]: E0209 18:35:23.189720 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:23.290861 kubelet[2169]: E0209 18:35:23.290811 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.290861 kubelet[2169]: W0209 18:35:23.290832 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.290861 kubelet[2169]: E0209 18:35:23.290860 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:23.291054 kubelet[2169]: E0209 18:35:23.291025 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.291054 kubelet[2169]: W0209 18:35:23.291033 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.291054 kubelet[2169]: E0209 18:35:23.291043 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:23.313363 kubelet[2169]: E0209 18:35:23.313329 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.313363 kubelet[2169]: W0209 18:35:23.313346 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.313505 kubelet[2169]: E0209 18:35:23.313378 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:23.344542 kubelet[2169]: E0209 18:35:23.344516 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:23.345225 env[1223]: time="2024-02-09T18:35:23.345187874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-22rnx,Uid:27a77f3e-a849-4edc-be13-95759836676d,Namespace:calico-system,Attempt:0,}" Feb 9 18:35:23.391975 kubelet[2169]: E0209 18:35:23.391883 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.391975 kubelet[2169]: W0209 18:35:23.391903 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.391975 kubelet[2169]: E0209 18:35:23.391927 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:23.415404 env[1223]: time="2024-02-09T18:35:23.415268043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:35:23.416819 env[1223]: time="2024-02-09T18:35:23.416390224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:35:23.416819 env[1223]: time="2024-02-09T18:35:23.416412744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:35:23.417303 env[1223]: time="2024-02-09T18:35:23.417173718Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/750060e68eb17c7267351bfe35114c15f22e01f8a5f21874bc2b2b2f59992d7e pid=2726 runtime=io.containerd.runc.v2 Feb 9 18:35:23.467515 env[1223]: time="2024-02-09T18:35:23.467477123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-22rnx,Uid:27a77f3e-a849-4edc-be13-95759836676d,Namespace:calico-system,Attempt:0,} returns sandbox id \"750060e68eb17c7267351bfe35114c15f22e01f8a5f21874bc2b2b2f59992d7e\"" Feb 9 18:35:23.468345 kubelet[2169]: E0209 18:35:23.468312 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:23.469805 env[1223]: time="2024-02-09T18:35:23.469778886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\"" Feb 9 18:35:23.492921 kubelet[2169]: E0209 18:35:23.492896 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.492921 kubelet[2169]: W0209 18:35:23.492918 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.493113 kubelet[2169]: E0209 18:35:23.492938 2169 plugins.go:736] "Error dynamically 
probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:23.509915 kubelet[2169]: E0209 18:35:23.509897 2169 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:23.509915 kubelet[2169]: W0209 18:35:23.509912 2169 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:23.510079 kubelet[2169]: E0209 18:35:23.509930 2169 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:23.599045 kubelet[2169]: E0209 18:35:23.598802 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:23.600375 env[1223]: time="2024-02-09T18:35:23.599579112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-559456cc49-sz6bl,Uid:9a010e29-dc62-4fa4-ab1d-be7ab556ce60,Namespace:calico-system,Attempt:0,}" Feb 9 18:35:23.613307 env[1223]: time="2024-02-09T18:35:23.613246284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:35:23.613493 env[1223]: time="2024-02-09T18:35:23.613455368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:35:23.613493 env[1223]: time="2024-02-09T18:35:23.613474408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:35:23.614274 env[1223]: time="2024-02-09T18:35:23.613957737Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/96fdf06126ab3cb3296488c91bd31a4e355fe582fc5bea12445a7411f47e2c29 pid=2770 runtime=io.containerd.runc.v2 Feb 9 18:35:23.679116 env[1223]: time="2024-02-09T18:35:23.679007133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-559456cc49-sz6bl,Uid:9a010e29-dc62-4fa4-ab1d-be7ab556ce60,Namespace:calico-system,Attempt:0,} returns sandbox id \"96fdf06126ab3cb3296488c91bd31a4e355fe582fc5bea12445a7411f47e2c29\"" Feb 9 18:35:23.681011 kubelet[2169]: E0209 18:35:23.680581 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:23.691000 audit[2829]: NETFILTER_CFG table=filter:107 family=2 entries=14 op=nft_register_rule pid=2829 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:23.691000 audit[2829]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffe29d1560 a2=0 a3=ffff9eeff6c0 items=0 ppid=2354 pid=2829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:23.696714 kernel: audit: type=1325 audit(1707503723.691:274): table=filter:107 family=2 entries=14 op=nft_register_rule pid=2829 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:23.696771 kernel: audit: type=1300 audit(1707503723.691:274): arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffe29d1560 a2=0 a3=ffff9eeff6c0 items=0 ppid=2354 pid=2829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:23.696792 kernel: audit: type=1327 audit(1707503723.691:274): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:23.691000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:23.691000 audit[2829]: NETFILTER_CFG table=nat:108 family=2 entries=20 op=nft_register_rule pid=2829 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:23.691000 audit[2829]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffe29d1560 a2=0 a3=ffff9eeff6c0 items=0 ppid=2354 pid=2829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:23.708574 kernel: audit: type=1325 audit(1707503723.691:275): table=nat:108 family=2 entries=20 op=nft_register_rule pid=2829 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:23.708623 kernel: audit: type=1300 audit(1707503723.691:275): arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffe29d1560 a2=0 a3=ffff9eeff6c0 items=0 ppid=2354 pid=2829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:23.708646 kernel: audit: type=1327 audit(1707503723.691:275): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:23.691000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:24.618342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1603702879.mount: Deactivated successfully. 
Feb 9 18:35:24.703462 env[1223]: time="2024-02-09T18:35:24.703412800Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:24.704857 env[1223]: time="2024-02-09T18:35:24.704824864Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbddd33ed55a4a5c129e8f09945d426860425b9778d9402efe7bcefea7990a57,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:24.706003 env[1223]: time="2024-02-09T18:35:24.705973725Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:24.707205 env[1223]: time="2024-02-09T18:35:24.707169986Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:24.707688 env[1223]: time="2024-02-09T18:35:24.707659274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" returns image reference \"sha256:cbddd33ed55a4a5c129e8f09945d426860425b9778d9402efe7bcefea7990a57\"" Feb 9 18:35:24.708487 env[1223]: time="2024-02-09T18:35:24.708463449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\"" Feb 9 18:35:24.709968 env[1223]: time="2024-02-09T18:35:24.709736351Z" level=info msg="CreateContainer within sandbox \"750060e68eb17c7267351bfe35114c15f22e01f8a5f21874bc2b2b2f59992d7e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 9 18:35:24.719730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2588618477.mount: Deactivated successfully. 
Feb 9 18:35:24.722913 env[1223]: time="2024-02-09T18:35:24.722874543Z" level=info msg="CreateContainer within sandbox \"750060e68eb17c7267351bfe35114c15f22e01f8a5f21874bc2b2b2f59992d7e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b17fb90e864307b16eef447825714169de5216890b49fbaa4efc53bbb6b0890f\"" Feb 9 18:35:24.724249 env[1223]: time="2024-02-09T18:35:24.724227647Z" level=info msg="StartContainer for \"b17fb90e864307b16eef447825714169de5216890b49fbaa4efc53bbb6b0890f\"" Feb 9 18:35:24.787400 env[1223]: time="2024-02-09T18:35:24.787347679Z" level=info msg="StartContainer for \"b17fb90e864307b16eef447825714169de5216890b49fbaa4efc53bbb6b0890f\" returns successfully" Feb 9 18:35:24.822540 env[1223]: time="2024-02-09T18:35:24.822496379Z" level=info msg="shim disconnected" id=b17fb90e864307b16eef447825714169de5216890b49fbaa4efc53bbb6b0890f Feb 9 18:35:24.822540 env[1223]: time="2024-02-09T18:35:24.822539780Z" level=warning msg="cleaning up after shim disconnected" id=b17fb90e864307b16eef447825714169de5216890b49fbaa4efc53bbb6b0890f namespace=k8s.io Feb 9 18:35:24.822540 env[1223]: time="2024-02-09T18:35:24.822549740Z" level=info msg="cleaning up dead shim" Feb 9 18:35:24.829158 env[1223]: time="2024-02-09T18:35:24.829125336Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:35:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2875 runtime=io.containerd.runc.v2\n" Feb 9 18:35:25.087791 kubelet[2169]: E0209 18:35:25.087754 2169 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbbzb" podUID=85c81bc7-ead8-41f3-b0f6-13db63c2997b Feb 9 18:35:25.137817 kubelet[2169]: E0209 18:35:25.137692 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:25.798818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2690831614.mount: Deactivated successfully. Feb 9 18:35:26.492346 env[1223]: time="2024-02-09T18:35:26.492299515Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:26.493712 env[1223]: time="2024-02-09T18:35:26.493683178Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fba96c9caf161e105c76b559b06b4b2337b89b54833d69984209161d93145969,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:26.494920 env[1223]: time="2024-02-09T18:35:26.494888438Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:26.496482 env[1223]: time="2024-02-09T18:35:26.496449943Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:5f2d3b8c354a4eb6de46e786889913916e620c6c256982fb8d0f1a1d36a282bc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:26.497025 env[1223]: time="2024-02-09T18:35:26.496997032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\" returns image reference \"sha256:fba96c9caf161e105c76b559b06b4b2337b89b54833d69984209161d93145969\"" Feb 9 18:35:26.498273 env[1223]: time="2024-02-09T18:35:26.498243772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\"" Feb 9 18:35:26.509720 env[1223]: time="2024-02-09T18:35:26.509691598Z" level=info msg="CreateContainer within sandbox \"96fdf06126ab3cb3296488c91bd31a4e355fe582fc5bea12445a7411f47e2c29\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 9 18:35:26.518967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount893736270.mount: Deactivated 
successfully. Feb 9 18:35:26.527128 env[1223]: time="2024-02-09T18:35:26.527082681Z" level=info msg="CreateContainer within sandbox \"96fdf06126ab3cb3296488c91bd31a4e355fe582fc5bea12445a7411f47e2c29\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0ec201c8038e359e7dbc758c6f36d4d1655c3f9fa29fafad2e79fe7cb297234a\"" Feb 9 18:35:26.527540 env[1223]: time="2024-02-09T18:35:26.527514728Z" level=info msg="StartContainer for \"0ec201c8038e359e7dbc758c6f36d4d1655c3f9fa29fafad2e79fe7cb297234a\"" Feb 9 18:35:26.606492 env[1223]: time="2024-02-09T18:35:26.604705703Z" level=info msg="StartContainer for \"0ec201c8038e359e7dbc758c6f36d4d1655c3f9fa29fafad2e79fe7cb297234a\" returns successfully" Feb 9 18:35:27.087765 kubelet[2169]: E0209 18:35:27.087732 2169 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbbzb" podUID=85c81bc7-ead8-41f3-b0f6-13db63c2997b Feb 9 18:35:27.141953 kubelet[2169]: E0209 18:35:27.141662 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:27.150432 kubelet[2169]: I0209 18:35:27.150090 2169 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-559456cc49-sz6bl" podStartSLOduration=-9.22337203170472e+09 pod.CreationTimestamp="2024-02-09 18:35:22 +0000 UTC" firstStartedPulling="2024-02-09 18:35:23.681265815 +0000 UTC m=+20.731471262" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:35:27.149430905 +0000 UTC m=+24.199636392" watchObservedRunningTime="2024-02-09 18:35:27.150057195 +0000 UTC m=+24.200262642" Feb 9 18:35:27.645262 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3532111414.mount: Deactivated successfully. Feb 9 18:35:28.146224 kubelet[2169]: I0209 18:35:28.145558 2169 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 9 18:35:28.146224 kubelet[2169]: E0209 18:35:28.146146 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:29.087973 kubelet[2169]: E0209 18:35:29.087153 2169 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbbzb" podUID=85c81bc7-ead8-41f3-b0f6-13db63c2997b Feb 9 18:35:30.086082 env[1223]: time="2024-02-09T18:35:30.086030990Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:30.087430 env[1223]: time="2024-02-09T18:35:30.087394929Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9c9318f5fbf505fc3d84676966009a3887e58ea1e3eac10039e5a96dfceb254b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:30.089120 env[1223]: time="2024-02-09T18:35:30.089089633Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:30.090840 env[1223]: time="2024-02-09T18:35:30.090802856Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:30.091371 env[1223]: time="2024-02-09T18:35:30.091331064Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:9c9318f5fbf505fc3d84676966009a3887e58ea1e3eac10039e5a96dfceb254b\"" Feb 9 18:35:30.093161 env[1223]: time="2024-02-09T18:35:30.093104289Z" level=info msg="CreateContainer within sandbox \"750060e68eb17c7267351bfe35114c15f22e01f8a5f21874bc2b2b2f59992d7e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 9 18:35:30.108135 env[1223]: time="2024-02-09T18:35:30.108092058Z" level=info msg="CreateContainer within sandbox \"750060e68eb17c7267351bfe35114c15f22e01f8a5f21874bc2b2b2f59992d7e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d755c1e444d77967eed82eb292698efce60ed7e72438c4d6882f4546b86a7b7a\"" Feb 9 18:35:30.108574 env[1223]: time="2024-02-09T18:35:30.108548505Z" level=info msg="StartContainer for \"d755c1e444d77967eed82eb292698efce60ed7e72438c4d6882f4546b86a7b7a\"" Feb 9 18:35:30.202297 env[1223]: time="2024-02-09T18:35:30.201665447Z" level=info msg="StartContainer for \"d755c1e444d77967eed82eb292698efce60ed7e72438c4d6882f4546b86a7b7a\" returns successfully" Feb 9 18:35:30.791646 env[1223]: time="2024-02-09T18:35:30.791589016Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 18:35:30.811985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d755c1e444d77967eed82eb292698efce60ed7e72438c4d6882f4546b86a7b7a-rootfs.mount: Deactivated successfully. 
Feb 9 18:35:30.899371 kubelet[2169]: I0209 18:35:30.899306 2169 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 18:35:30.902181 env[1223]: time="2024-02-09T18:35:30.902141402Z" level=info msg="shim disconnected" id=d755c1e444d77967eed82eb292698efce60ed7e72438c4d6882f4546b86a7b7a Feb 9 18:35:30.902341 env[1223]: time="2024-02-09T18:35:30.902323164Z" level=warning msg="cleaning up after shim disconnected" id=d755c1e444d77967eed82eb292698efce60ed7e72438c4d6882f4546b86a7b7a namespace=k8s.io Feb 9 18:35:30.902438 env[1223]: time="2024-02-09T18:35:30.902422846Z" level=info msg="cleaning up dead shim" Feb 9 18:35:30.911497 env[1223]: time="2024-02-09T18:35:30.911429812Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:35:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2993 runtime=io.containerd.runc.v2\n" Feb 9 18:35:30.921982 kubelet[2169]: I0209 18:35:30.919661 2169 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:35:30.921982 kubelet[2169]: I0209 18:35:30.919916 2169 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:35:30.926676 kubelet[2169]: I0209 18:35:30.926644 2169 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:35:31.045364 kubelet[2169]: I0209 18:35:31.045243 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/503ecc7d-dfef-4edc-be40-46d8e27281f8-config-volume\") pod \"coredns-787d4945fb-6v8j8\" (UID: \"503ecc7d-dfef-4edc-be40-46d8e27281f8\") " pod="kube-system/coredns-787d4945fb-6v8j8" Feb 9 18:35:31.045364 kubelet[2169]: I0209 18:35:31.045296 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe06e181-2fd6-42d6-8ad3-fb53734220fc-tigera-ca-bundle\") pod \"calico-kube-controllers-6d59495b99-568dd\" (UID: \"fe06e181-2fd6-42d6-8ad3-fb53734220fc\") " 
pod="calico-system/calico-kube-controllers-6d59495b99-568dd" Feb 9 18:35:31.045364 kubelet[2169]: I0209 18:35:31.045328 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x789\" (UniqueName: \"kubernetes.io/projected/09bc0638-8c0c-4129-8ff7-de2aae58b31e-kube-api-access-8x789\") pod \"coredns-787d4945fb-qbgfk\" (UID: \"09bc0638-8c0c-4129-8ff7-de2aae58b31e\") " pod="kube-system/coredns-787d4945fb-qbgfk" Feb 9 18:35:31.045364 kubelet[2169]: I0209 18:35:31.045351 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhfg9\" (UniqueName: \"kubernetes.io/projected/fe06e181-2fd6-42d6-8ad3-fb53734220fc-kube-api-access-qhfg9\") pod \"calico-kube-controllers-6d59495b99-568dd\" (UID: \"fe06e181-2fd6-42d6-8ad3-fb53734220fc\") " pod="calico-system/calico-kube-controllers-6d59495b99-568dd" Feb 9 18:35:31.045564 kubelet[2169]: I0209 18:35:31.045393 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09bc0638-8c0c-4129-8ff7-de2aae58b31e-config-volume\") pod \"coredns-787d4945fb-qbgfk\" (UID: \"09bc0638-8c0c-4129-8ff7-de2aae58b31e\") " pod="kube-system/coredns-787d4945fb-qbgfk" Feb 9 18:35:31.045564 kubelet[2169]: I0209 18:35:31.045448 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j96p5\" (UniqueName: \"kubernetes.io/projected/503ecc7d-dfef-4edc-be40-46d8e27281f8-kube-api-access-j96p5\") pod \"coredns-787d4945fb-6v8j8\" (UID: \"503ecc7d-dfef-4edc-be40-46d8e27281f8\") " pod="kube-system/coredns-787d4945fb-6v8j8" Feb 9 18:35:31.089981 env[1223]: time="2024-02-09T18:35:31.089920066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tbbzb,Uid:85c81bc7-ead8-41f3-b0f6-13db63c2997b,Namespace:calico-system,Attempt:0,}" Feb 9 18:35:31.148861 env[1223]: 
time="2024-02-09T18:35:31.148787460Z" level=error msg="Failed to destroy network for sandbox \"769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:35:31.151338 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c-shm.mount: Deactivated successfully. Feb 9 18:35:31.158168 env[1223]: time="2024-02-09T18:35:31.157636980Z" level=error msg="encountered an error cleaning up failed sandbox \"769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:35:31.158168 env[1223]: time="2024-02-09T18:35:31.157697901Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tbbzb,Uid:85c81bc7-ead8-41f3-b0f6-13db63c2997b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:35:31.158310 kubelet[2169]: E0209 18:35:31.158022 2169 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:35:31.158310 kubelet[2169]: 
E0209 18:35:31.158101 2169 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tbbzb" Feb 9 18:35:31.158310 kubelet[2169]: E0209 18:35:31.158288 2169 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tbbzb" Feb 9 18:35:31.158418 kubelet[2169]: E0209 18:35:31.158344 2169 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tbbzb_calico-system(85c81bc7-ead8-41f3-b0f6-13db63c2997b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tbbzb_calico-system(85c81bc7-ead8-41f3-b0f6-13db63c2997b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tbbzb" podUID=85c81bc7-ead8-41f3-b0f6-13db63c2997b Feb 9 18:35:31.161772 kubelet[2169]: E0209 18:35:31.161744 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:31.162733 env[1223]: 
time="2024-02-09T18:35:31.162497646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\"" Feb 9 18:35:31.222132 kubelet[2169]: E0209 18:35:31.222097 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:31.222845 env[1223]: time="2024-02-09T18:35:31.222540856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-qbgfk,Uid:09bc0638-8c0c-4129-8ff7-de2aae58b31e,Namespace:kube-system,Attempt:0,}" Feb 9 18:35:31.224286 kubelet[2169]: E0209 18:35:31.224263 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:31.224721 env[1223]: time="2024-02-09T18:35:31.224690245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-6v8j8,Uid:503ecc7d-dfef-4edc-be40-46d8e27281f8,Namespace:kube-system,Attempt:0,}" Feb 9 18:35:31.230201 env[1223]: time="2024-02-09T18:35:31.230167839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d59495b99-568dd,Uid:fe06e181-2fd6-42d6-8ad3-fb53734220fc,Namespace:calico-system,Attempt:0,}" Feb 9 18:35:31.284558 env[1223]: time="2024-02-09T18:35:31.284500053Z" level=error msg="Failed to destroy network for sandbox \"976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:35:31.285446 env[1223]: time="2024-02-09T18:35:31.285407385Z" level=error msg="encountered an error cleaning up failed sandbox \"976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:35:31.285594 env[1223]: time="2024-02-09T18:35:31.285565547Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-6v8j8,Uid:503ecc7d-dfef-4edc-be40-46d8e27281f8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:35:31.285905 kubelet[2169]: E0209 18:35:31.285879 2169 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:35:31.285991 kubelet[2169]: E0209 18:35:31.285932 2169 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-6v8j8" Feb 9 18:35:31.285991 kubelet[2169]: E0209 18:35:31.285961 2169 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-6v8j8" Feb 9 18:35:31.286056 kubelet[2169]: E0209 18:35:31.286009 2169 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-6v8j8_kube-system(503ecc7d-dfef-4edc-be40-46d8e27281f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-6v8j8_kube-system(503ecc7d-dfef-4edc-be40-46d8e27281f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-6v8j8" podUID=503ecc7d-dfef-4edc-be40-46d8e27281f8 Feb 9 18:35:31.294704 env[1223]: time="2024-02-09T18:35:31.294652270Z" level=error msg="Failed to destroy network for sandbox \"866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:35:31.295084 env[1223]: time="2024-02-09T18:35:31.295042075Z" level=error msg="encountered an error cleaning up failed sandbox \"866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:35:31.295149 env[1223]: time="2024-02-09T18:35:31.295090796Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-qbgfk,Uid:09bc0638-8c0c-4129-8ff7-de2aae58b31e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:35:31.296582 kubelet[2169]: E0209 18:35:31.295409 2169 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:35:31.296582 kubelet[2169]: E0209 18:35:31.295458 2169 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-qbgfk" Feb 9 18:35:31.296582 kubelet[2169]: E0209 18:35:31.295480 2169 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-qbgfk" Feb 9 18:35:31.296722 kubelet[2169]: E0209 18:35:31.295528 2169 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-qbgfk_kube-system(09bc0638-8c0c-4129-8ff7-de2aae58b31e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-787d4945fb-qbgfk_kube-system(09bc0638-8c0c-4129-8ff7-de2aae58b31e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-qbgfk" podUID=09bc0638-8c0c-4129-8ff7-de2aae58b31e Feb 9 18:35:31.302645 env[1223]: time="2024-02-09T18:35:31.302593977Z" level=error msg="Failed to destroy network for sandbox \"45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:35:31.302936 env[1223]: time="2024-02-09T18:35:31.302897461Z" level=error msg="encountered an error cleaning up failed sandbox \"45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:35:31.302981 env[1223]: time="2024-02-09T18:35:31.302946982Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d59495b99-568dd,Uid:fe06e181-2fd6-42d6-8ad3-fb53734220fc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:35:31.303167 kubelet[2169]: E0209 18:35:31.303136 2169 remote_runtime.go:176] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:35:31.303217 kubelet[2169]: E0209 18:35:31.303185 2169 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d59495b99-568dd" Feb 9 18:35:31.303217 kubelet[2169]: E0209 18:35:31.303207 2169 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d59495b99-568dd" Feb 9 18:35:31.303268 kubelet[2169]: E0209 18:35:31.303249 2169 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d59495b99-568dd_calico-system(fe06e181-2fd6-42d6-8ad3-fb53734220fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d59495b99-568dd_calico-system(fe06e181-2fd6-42d6-8ad3-fb53734220fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d59495b99-568dd" podUID=fe06e181-2fd6-42d6-8ad3-fb53734220fc Feb 9 18:35:32.164135 kubelet[2169]: I0209 18:35:32.164081 2169 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" Feb 9 18:35:32.166098 env[1223]: time="2024-02-09T18:35:32.164930226Z" level=info msg="StopPodSandbox for \"769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c\"" Feb 9 18:35:32.166098 env[1223]: time="2024-02-09T18:35:32.165635716Z" level=info msg="StopPodSandbox for \"976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd\"" Feb 9 18:35:32.166408 kubelet[2169]: I0209 18:35:32.165216 2169 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" Feb 9 18:35:32.166408 kubelet[2169]: I0209 18:35:32.166275 2169 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" Feb 9 18:35:32.166831 env[1223]: time="2024-02-09T18:35:32.166758650Z" level=info msg="StopPodSandbox for \"45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a\"" Feb 9 18:35:32.169290 kubelet[2169]: I0209 18:35:32.169226 2169 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" Feb 9 18:35:32.170837 env[1223]: time="2024-02-09T18:35:32.170804263Z" level=info msg="StopPodSandbox for \"866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328\"" Feb 9 18:35:32.199339 env[1223]: time="2024-02-09T18:35:32.199280915Z" level=error msg="StopPodSandbox for \"769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c\" failed" error="failed to destroy network for sandbox 
\"769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:35:32.199544 kubelet[2169]: E0209 18:35:32.199521 2169 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" Feb 9 18:35:32.199602 kubelet[2169]: E0209 18:35:32.199577 2169 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c} Feb 9 18:35:32.199637 kubelet[2169]: E0209 18:35:32.199610 2169 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"85c81bc7-ead8-41f3-b0f6-13db63c2997b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 18:35:32.199692 kubelet[2169]: E0209 18:35:32.199639 2169 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"85c81bc7-ead8-41f3-b0f6-13db63c2997b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tbbzb" podUID=85c81bc7-ead8-41f3-b0f6-13db63c2997b Feb 9 18:35:32.202680 env[1223]: time="2024-02-09T18:35:32.202640279Z" level=error msg="StopPodSandbox for \"45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a\" failed" error="failed to destroy network for sandbox \"45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:35:32.202838 kubelet[2169]: E0209 18:35:32.202815 2169 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" Feb 9 18:35:32.202879 kubelet[2169]: E0209 18:35:32.202845 2169 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a} Feb 9 18:35:32.202879 kubelet[2169]: E0209 18:35:32.202875 2169 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fe06e181-2fd6-42d6-8ad3-fb53734220fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Feb 9 18:35:32.202979 kubelet[2169]: E0209 18:35:32.202901 2169 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fe06e181-2fd6-42d6-8ad3-fb53734220fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d59495b99-568dd" podUID=fe06e181-2fd6-42d6-8ad3-fb53734220fc Feb 9 18:35:32.210430 env[1223]: time="2024-02-09T18:35:32.210386500Z" level=error msg="StopPodSandbox for \"976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd\" failed" error="failed to destroy network for sandbox \"976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:35:32.210687 kubelet[2169]: E0209 18:35:32.210664 2169 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" Feb 9 18:35:32.210763 kubelet[2169]: E0209 18:35:32.210694 2169 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd} Feb 9 18:35:32.210763 kubelet[2169]: E0209 18:35:32.210724 2169 
kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"503ecc7d-dfef-4edc-be40-46d8e27281f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 18:35:32.210763 kubelet[2169]: E0209 18:35:32.210752 2169 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"503ecc7d-dfef-4edc-be40-46d8e27281f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-6v8j8" podUID=503ecc7d-dfef-4edc-be40-46d8e27281f8 Feb 9 18:35:32.211771 env[1223]: time="2024-02-09T18:35:32.211733357Z" level=error msg="StopPodSandbox for \"866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328\" failed" error="failed to destroy network for sandbox \"866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:35:32.211888 kubelet[2169]: E0209 18:35:32.211873 2169 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" Feb 9 18:35:32.211937 kubelet[2169]: E0209 18:35:32.211895 2169 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328} Feb 9 18:35:32.211937 kubelet[2169]: E0209 18:35:32.211921 2169 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"09bc0638-8c0c-4129-8ff7-de2aae58b31e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 18:35:32.212028 kubelet[2169]: E0209 18:35:32.211941 2169 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"09bc0638-8c0c-4129-8ff7-de2aae58b31e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-qbgfk" podUID=09bc0638-8c0c-4129-8ff7-de2aae58b31e Feb 9 18:35:36.216776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount793940867.mount: Deactivated successfully. 
Feb 9 18:35:36.515107 env[1223]: time="2024-02-09T18:35:36.514962679Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:36.516887 env[1223]: time="2024-02-09T18:35:36.516855501Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c445639cb28807ced09724016dc3b273b170b14d3b3d0c39b1affa1cc6b68774,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:36.518284 env[1223]: time="2024-02-09T18:35:36.518251117Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:36.520322 env[1223]: time="2024-02-09T18:35:36.520292620Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:a45dffb21a0e9ca8962f36359a2ab776beeecd93843543c2fa1745d7bbb0f754,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:36.520561 env[1223]: time="2024-02-09T18:35:36.520535503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\" returns image reference \"sha256:c445639cb28807ced09724016dc3b273b170b14d3b3d0c39b1affa1cc6b68774\"" Feb 9 18:35:36.533026 env[1223]: time="2024-02-09T18:35:36.532982966Z" level=info msg="CreateContainer within sandbox \"750060e68eb17c7267351bfe35114c15f22e01f8a5f21874bc2b2b2f59992d7e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 9 18:35:36.548028 env[1223]: time="2024-02-09T18:35:36.547957858Z" level=info msg="CreateContainer within sandbox \"750060e68eb17c7267351bfe35114c15f22e01f8a5f21874bc2b2b2f59992d7e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ff74826e677b5e6a7386f16c2ab84443ed7382c9c98cc76278fd2a0ed8f4d539\"" Feb 9 18:35:36.548447 env[1223]: time="2024-02-09T18:35:36.548402824Z" level=info msg="StartContainer for 
\"ff74826e677b5e6a7386f16c2ab84443ed7382c9c98cc76278fd2a0ed8f4d539\"" Feb 9 18:35:36.629096 env[1223]: time="2024-02-09T18:35:36.629033751Z" level=info msg="StartContainer for \"ff74826e677b5e6a7386f16c2ab84443ed7382c9c98cc76278fd2a0ed8f4d539\" returns successfully" Feb 9 18:35:36.760015 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 9 18:35:36.760132 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Feb 9 18:35:37.181834 kubelet[2169]: E0209 18:35:37.181738 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:37.196512 kubelet[2169]: I0209 18:35:37.195218 2169 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-22rnx" podStartSLOduration=-9.22337202165959e+09 pod.CreationTimestamp="2024-02-09 18:35:22 +0000 UTC" firstStartedPulling="2024-02-09 18:35:23.468748907 +0000 UTC m=+20.518954314" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:35:37.19504124 +0000 UTC m=+34.245246687" watchObservedRunningTime="2024-02-09 18:35:37.195185162 +0000 UTC m=+34.245390609" Feb 9 18:35:38.074000 audit[3402]: AVC avc: denied { write } for pid=3402 comm="tee" name="fd" dev="proc" ino=19699 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 18:35:38.074000 audit[3413]: AVC avc: denied { write } for pid=3413 comm="tee" name="fd" dev="proc" ino=19703 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 18:35:38.079263 kernel: audit: type=1400 audit(1707503738.074:276): avc: denied { write } for pid=3402 comm="tee" name="fd" dev="proc" ino=19699 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 18:35:38.079336 
kernel: audit: type=1400 audit(1707503738.074:277): avc: denied { write } for pid=3413 comm="tee" name="fd" dev="proc" ino=19703 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 18:35:38.079369 kernel: audit: type=1300 audit(1707503738.074:277): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff7236990 a2=241 a3=1b6 items=1 ppid=3367 pid=3413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:38.074000 audit[3413]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff7236990 a2=241 a3=1b6 items=1 ppid=3367 pid=3413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:38.074000 audit: CWD cwd="/etc/service/enabled/bird/log" Feb 9 18:35:38.082539 kernel: audit: type=1307 audit(1707503738.074:277): cwd="/etc/service/enabled/bird/log" Feb 9 18:35:38.082606 kernel: audit: type=1302 audit(1707503738.074:277): item=0 name="/dev/fd/63" inode=19692 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:35:38.074000 audit: PATH item=0 name="/dev/fd/63" inode=19692 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:35:38.084348 kernel: audit: type=1327 audit(1707503738.074:277): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 18:35:38.074000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 18:35:38.085909 kernel: audit: type=1300 audit(1707503738.074:276): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe6334980 a2=241 a3=1b6 items=1 ppid=3359 pid=3402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:38.074000 audit[3402]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe6334980 a2=241 a3=1b6 items=1 ppid=3359 pid=3402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:38.074000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Feb 9 18:35:38.092508 kernel: audit: type=1307 audit(1707503738.074:276): cwd="/etc/service/enabled/node-status-reporter/log" Feb 9 18:35:38.092563 kernel: audit: type=1302 audit(1707503738.074:276): item=0 name="/dev/fd/63" inode=19685 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:35:38.074000 audit: PATH item=0 name="/dev/fd/63" inode=19685 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:35:38.074000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 18:35:38.095931 kernel: audit: type=1327 audit(1707503738.074:276): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 
18:35:38.081000 audit[3421]: AVC avc: denied { write } for pid=3421 comm="tee" name="fd" dev="proc" ino=20675 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 18:35:38.081000 audit[3421]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd712f98f a2=241 a3=1b6 items=1 ppid=3369 pid=3421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:38.081000 audit: CWD cwd="/etc/service/enabled/confd/log" Feb 9 18:35:38.081000 audit: PATH item=0 name="/dev/fd/63" inode=20670 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:35:38.081000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 18:35:38.089000 audit[3417]: AVC avc: denied { write } for pid=3417 comm="tee" name="fd" dev="proc" ino=18988 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 18:35:38.089000 audit[3417]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffff18d97f a2=241 a3=1b6 items=1 ppid=3373 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:38.089000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 9 18:35:38.089000 audit: PATH item=0 name="/dev/fd/63" inode=18985 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:35:38.089000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 18:35:38.091000 audit[3427]: AVC avc: denied { write } for pid=3427 comm="tee" name="fd" dev="proc" ino=19712 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 18:35:38.091000 audit[3427]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff36e8991 a2=241 a3=1b6 items=1 ppid=3366 pid=3427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:38.091000 audit: CWD cwd="/etc/service/enabled/cni/log" Feb 9 18:35:38.091000 audit: PATH item=0 name="/dev/fd/63" inode=19705 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:35:38.091000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 18:35:38.094000 audit[3431]: AVC avc: denied { write } for pid=3431 comm="tee" name="fd" dev="proc" ino=18034 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 18:35:38.094000 audit[3431]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc3f7898f a2=241 a3=1b6 items=1 ppid=3364 pid=3431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:38.094000 audit: CWD cwd="/etc/service/enabled/felix/log" Feb 9 18:35:38.094000 audit: PATH item=0 name="/dev/fd/63" inode=18030 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:35:38.094000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 18:35:38.121000 audit[3434]: AVC avc: denied { write } for pid=3434 comm="tee" name="fd" dev="proc" ino=18046 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 18:35:38.121000 audit[3434]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd366398f a2=241 a3=1b6 items=1 ppid=3360 pid=3434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:38.121000 audit: CWD cwd="/etc/service/enabled/bird6/log" Feb 9 18:35:38.121000 audit: PATH item=0 name="/dev/fd/63" inode=18031 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:35:38.121000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 18:35:38.182973 kubelet[2169]: E0209 18:35:38.182934 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:38.198529 systemd[1]: run-containerd-runc-k8s.io-ff74826e677b5e6a7386f16c2ab84443ed7382c9c98cc76278fd2a0ed8f4d539-runc.kiSxbP.mount: Deactivated successfully. 
Feb 9 18:35:43.088310 env[1223]: time="2024-02-09T18:35:43.088264545Z" level=info msg="StopPodSandbox for \"866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328\"" Feb 9 18:35:43.088951 env[1223]: time="2024-02-09T18:35:43.088890751Z" level=info msg="StopPodSandbox for \"976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd\"" Feb 9 18:35:43.316155 env[1223]: 2024-02-09 18:35:43.177 [INFO][3612] k8s.go 578: Cleaning up netns ContainerID="976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" Feb 9 18:35:43.316155 env[1223]: 2024-02-09 18:35:43.177 [INFO][3612] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" iface="eth0" netns="/var/run/netns/cni-e03cbc38-1d65-3b19-93fd-3f7c22c7f2df" Feb 9 18:35:43.316155 env[1223]: 2024-02-09 18:35:43.178 [INFO][3612] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" iface="eth0" netns="/var/run/netns/cni-e03cbc38-1d65-3b19-93fd-3f7c22c7f2df" Feb 9 18:35:43.316155 env[1223]: 2024-02-09 18:35:43.179 [INFO][3612] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" iface="eth0" netns="/var/run/netns/cni-e03cbc38-1d65-3b19-93fd-3f7c22c7f2df" Feb 9 18:35:43.316155 env[1223]: 2024-02-09 18:35:43.179 [INFO][3612] k8s.go 585: Releasing IP address(es) ContainerID="976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" Feb 9 18:35:43.316155 env[1223]: 2024-02-09 18:35:43.179 [INFO][3612] utils.go 188: Calico CNI releasing IP address ContainerID="976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" Feb 9 18:35:43.316155 env[1223]: 2024-02-09 18:35:43.295 [INFO][3626] ipam_plugin.go 415: Releasing address using handleID ContainerID="976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" HandleID="k8s-pod-network.976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" Workload="localhost-k8s-coredns--787d4945fb--6v8j8-eth0" Feb 9 18:35:43.316155 env[1223]: 2024-02-09 18:35:43.295 [INFO][3626] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:35:43.316155 env[1223]: 2024-02-09 18:35:43.296 [INFO][3626] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:35:43.316155 env[1223]: 2024-02-09 18:35:43.310 [WARNING][3626] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" HandleID="k8s-pod-network.976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" Workload="localhost-k8s-coredns--787d4945fb--6v8j8-eth0" Feb 9 18:35:43.316155 env[1223]: 2024-02-09 18:35:43.310 [INFO][3626] ipam_plugin.go 443: Releasing address using workloadID ContainerID="976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" HandleID="k8s-pod-network.976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" Workload="localhost-k8s-coredns--787d4945fb--6v8j8-eth0" Feb 9 18:35:43.316155 env[1223]: 2024-02-09 18:35:43.311 [INFO][3626] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 18:35:43.316155 env[1223]: 2024-02-09 18:35:43.314 [INFO][3612] k8s.go 591: Teardown processing complete. ContainerID="976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" Feb 9 18:35:43.318181 systemd[1]: run-netns-cni\x2de03cbc38\x2d1d65\x2d3b19\x2d93fd\x2d3f7c22c7f2df.mount: Deactivated successfully. Feb 9 18:35:43.318472 env[1223]: time="2024-02-09T18:35:43.318310434Z" level=info msg="TearDown network for sandbox \"976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd\" successfully" Feb 9 18:35:43.318472 env[1223]: time="2024-02-09T18:35:43.318346835Z" level=info msg="StopPodSandbox for \"976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd\" returns successfully" Feb 9 18:35:43.318697 kubelet[2169]: E0209 18:35:43.318672 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:43.319125 env[1223]: time="2024-02-09T18:35:43.319080482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-6v8j8,Uid:503ecc7d-dfef-4edc-be40-46d8e27281f8,Namespace:kube-system,Attempt:1,}" Feb 9 18:35:43.335388 env[1223]: 2024-02-09 18:35:43.179 [INFO][3611] k8s.go 578: Cleaning up netns ContainerID="866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" Feb 9 18:35:43.335388 env[1223]: 2024-02-09 18:35:43.179 [INFO][3611] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" iface="eth0" netns="/var/run/netns/cni-e8341e88-0d95-e806-d7b3-3d20d571c3e6" Feb 9 18:35:43.335388 env[1223]: 2024-02-09 18:35:43.180 [INFO][3611] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" iface="eth0" netns="/var/run/netns/cni-e8341e88-0d95-e806-d7b3-3d20d571c3e6" Feb 9 18:35:43.335388 env[1223]: 2024-02-09 18:35:43.180 [INFO][3611] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" iface="eth0" netns="/var/run/netns/cni-e8341e88-0d95-e806-d7b3-3d20d571c3e6" Feb 9 18:35:43.335388 env[1223]: 2024-02-09 18:35:43.180 [INFO][3611] k8s.go 585: Releasing IP address(es) ContainerID="866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" Feb 9 18:35:43.335388 env[1223]: 2024-02-09 18:35:43.180 [INFO][3611] utils.go 188: Calico CNI releasing IP address ContainerID="866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" Feb 9 18:35:43.335388 env[1223]: 2024-02-09 18:35:43.296 [INFO][3627] ipam_plugin.go 415: Releasing address using handleID ContainerID="866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" HandleID="k8s-pod-network.866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" Workload="localhost-k8s-coredns--787d4945fb--qbgfk-eth0" Feb 9 18:35:43.335388 env[1223]: 2024-02-09 18:35:43.296 [INFO][3627] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:35:43.335388 env[1223]: 2024-02-09 18:35:43.311 [INFO][3627] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:35:43.335388 env[1223]: 2024-02-09 18:35:43.329 [WARNING][3627] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" HandleID="k8s-pod-network.866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" Workload="localhost-k8s-coredns--787d4945fb--qbgfk-eth0" Feb 9 18:35:43.335388 env[1223]: 2024-02-09 18:35:43.330 [INFO][3627] ipam_plugin.go 443: Releasing address using workloadID ContainerID="866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" HandleID="k8s-pod-network.866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" Workload="localhost-k8s-coredns--787d4945fb--qbgfk-eth0" Feb 9 18:35:43.335388 env[1223]: 2024-02-09 18:35:43.331 [INFO][3627] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 18:35:43.335388 env[1223]: 2024-02-09 18:35:43.333 [INFO][3611] k8s.go 591: Teardown processing complete. ContainerID="866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" Feb 9 18:35:43.337387 systemd[1]: run-netns-cni\x2de8341e88\x2d0d95\x2de806\x2dd7b3\x2d3d20d571c3e6.mount: Deactivated successfully. 
Feb 9 18:35:43.337553 env[1223]: time="2024-02-09T18:35:43.337507059Z" level=info msg="TearDown network for sandbox \"866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328\" successfully" Feb 9 18:35:43.337553 env[1223]: time="2024-02-09T18:35:43.337548539Z" level=info msg="StopPodSandbox for \"866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328\" returns successfully" Feb 9 18:35:43.337857 kubelet[2169]: E0209 18:35:43.337838 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:43.338610 env[1223]: time="2024-02-09T18:35:43.338529388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-qbgfk,Uid:09bc0638-8c0c-4129-8ff7-de2aae58b31e,Namespace:kube-system,Attempt:1,}" Feb 9 18:35:43.540403 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 18:35:43.540507 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali303a35b8668: link becomes ready Feb 9 18:35:43.541488 systemd-networkd[1099]: cali303a35b8668: Link UP Feb 9 18:35:43.541625 systemd-networkd[1099]: cali303a35b8668: Gained carrier Feb 9 18:35:43.552741 env[1223]: 2024-02-09 18:35:43.450 [INFO][3661] utils.go 100: File /var/lib/calico/mtu does not exist Feb 9 18:35:43.552741 env[1223]: 2024-02-09 18:35:43.463 [INFO][3661] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--787d4945fb--qbgfk-eth0 coredns-787d4945fb- kube-system 09bc0638-8c0c-4129-8ff7-de2aae58b31e 685 0 2024-02-09 18:35:17 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-787d4945fb-qbgfk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali303a35b8668 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 
9153 0 }] []}} ContainerID="33a1c014fd09c2318ae73dd4b67d7c7da9fbac689a7ece3d3a0144f0ab5fb229" Namespace="kube-system" Pod="coredns-787d4945fb-qbgfk" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--qbgfk-" Feb 9 18:35:43.552741 env[1223]: 2024-02-09 18:35:43.463 [INFO][3661] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="33a1c014fd09c2318ae73dd4b67d7c7da9fbac689a7ece3d3a0144f0ab5fb229" Namespace="kube-system" Pod="coredns-787d4945fb-qbgfk" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--qbgfk-eth0" Feb 9 18:35:43.552741 env[1223]: 2024-02-09 18:35:43.497 [INFO][3687] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33a1c014fd09c2318ae73dd4b67d7c7da9fbac689a7ece3d3a0144f0ab5fb229" HandleID="k8s-pod-network.33a1c014fd09c2318ae73dd4b67d7c7da9fbac689a7ece3d3a0144f0ab5fb229" Workload="localhost-k8s-coredns--787d4945fb--qbgfk-eth0" Feb 9 18:35:43.552741 env[1223]: 2024-02-09 18:35:43.513 [INFO][3687] ipam_plugin.go 268: Auto assigning IP ContainerID="33a1c014fd09c2318ae73dd4b67d7c7da9fbac689a7ece3d3a0144f0ab5fb229" HandleID="k8s-pod-network.33a1c014fd09c2318ae73dd4b67d7c7da9fbac689a7ece3d3a0144f0ab5fb229" Workload="localhost-k8s-coredns--787d4945fb--qbgfk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400062f370), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-787d4945fb-qbgfk", "timestamp":"2024-02-09 18:35:43.497898359 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 18:35:43.552741 env[1223]: 2024-02-09 18:35:43.513 [INFO][3687] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:35:43.552741 env[1223]: 2024-02-09 18:35:43.513 [INFO][3687] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 18:35:43.552741 env[1223]: 2024-02-09 18:35:43.513 [INFO][3687] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 9 18:35:43.552741 env[1223]: 2024-02-09 18:35:43.515 [INFO][3687] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.33a1c014fd09c2318ae73dd4b67d7c7da9fbac689a7ece3d3a0144f0ab5fb229" host="localhost" Feb 9 18:35:43.552741 env[1223]: 2024-02-09 18:35:43.519 [INFO][3687] ipam.go 372: Looking up existing affinities for host host="localhost" Feb 9 18:35:43.552741 env[1223]: 2024-02-09 18:35:43.522 [INFO][3687] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 9 18:35:43.552741 env[1223]: 2024-02-09 18:35:43.523 [INFO][3687] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 9 18:35:43.552741 env[1223]: 2024-02-09 18:35:43.524 [INFO][3687] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 9 18:35:43.552741 env[1223]: 2024-02-09 18:35:43.524 [INFO][3687] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.33a1c014fd09c2318ae73dd4b67d7c7da9fbac689a7ece3d3a0144f0ab5fb229" host="localhost" Feb 9 18:35:43.552741 env[1223]: 2024-02-09 18:35:43.526 [INFO][3687] ipam.go 1682: Creating new handle: k8s-pod-network.33a1c014fd09c2318ae73dd4b67d7c7da9fbac689a7ece3d3a0144f0ab5fb229 Feb 9 18:35:43.552741 env[1223]: 2024-02-09 18:35:43.528 [INFO][3687] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.33a1c014fd09c2318ae73dd4b67d7c7da9fbac689a7ece3d3a0144f0ab5fb229" host="localhost" Feb 9 18:35:43.552741 env[1223]: 2024-02-09 18:35:43.532 [INFO][3687] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.33a1c014fd09c2318ae73dd4b67d7c7da9fbac689a7ece3d3a0144f0ab5fb229" host="localhost" Feb 9 18:35:43.552741 env[1223]: 2024-02-09 18:35:43.532 [INFO][3687] ipam.go 
847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.33a1c014fd09c2318ae73dd4b67d7c7da9fbac689a7ece3d3a0144f0ab5fb229" host="localhost" Feb 9 18:35:43.552741 env[1223]: 2024-02-09 18:35:43.532 [INFO][3687] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 18:35:43.552741 env[1223]: 2024-02-09 18:35:43.532 [INFO][3687] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="33a1c014fd09c2318ae73dd4b67d7c7da9fbac689a7ece3d3a0144f0ab5fb229" HandleID="k8s-pod-network.33a1c014fd09c2318ae73dd4b67d7c7da9fbac689a7ece3d3a0144f0ab5fb229" Workload="localhost-k8s-coredns--787d4945fb--qbgfk-eth0" Feb 9 18:35:43.553340 env[1223]: 2024-02-09 18:35:43.534 [INFO][3661] k8s.go 385: Populated endpoint ContainerID="33a1c014fd09c2318ae73dd4b67d7c7da9fbac689a7ece3d3a0144f0ab5fb229" Namespace="kube-system" Pod="coredns-787d4945fb-qbgfk" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--qbgfk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--qbgfk-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"09bc0638-8c0c-4129-8ff7-de2aae58b31e", ResourceVersion:"685", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 35, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-787d4945fb-qbgfk", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali303a35b8668", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:35:43.553340 env[1223]: 2024-02-09 18:35:43.534 [INFO][3661] k8s.go 386: Calico CNI using IPs: [192.168.88.129/32] ContainerID="33a1c014fd09c2318ae73dd4b67d7c7da9fbac689a7ece3d3a0144f0ab5fb229" Namespace="kube-system" Pod="coredns-787d4945fb-qbgfk" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--qbgfk-eth0" Feb 9 18:35:43.553340 env[1223]: 2024-02-09 18:35:43.534 [INFO][3661] dataplane_linux.go 68: Setting the host side veth name to cali303a35b8668 ContainerID="33a1c014fd09c2318ae73dd4b67d7c7da9fbac689a7ece3d3a0144f0ab5fb229" Namespace="kube-system" Pod="coredns-787d4945fb-qbgfk" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--qbgfk-eth0" Feb 9 18:35:43.553340 env[1223]: 2024-02-09 18:35:43.541 [INFO][3661] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="33a1c014fd09c2318ae73dd4b67d7c7da9fbac689a7ece3d3a0144f0ab5fb229" Namespace="kube-system" Pod="coredns-787d4945fb-qbgfk" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--qbgfk-eth0" Feb 9 18:35:43.553340 env[1223]: 2024-02-09 18:35:43.541 [INFO][3661] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="33a1c014fd09c2318ae73dd4b67d7c7da9fbac689a7ece3d3a0144f0ab5fb229" Namespace="kube-system" Pod="coredns-787d4945fb-qbgfk" 
WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--qbgfk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--qbgfk-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"09bc0638-8c0c-4129-8ff7-de2aae58b31e", ResourceVersion:"685", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 35, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"33a1c014fd09c2318ae73dd4b67d7c7da9fbac689a7ece3d3a0144f0ab5fb229", Pod:"coredns-787d4945fb-qbgfk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali303a35b8668", MAC:"52:b0:9c:c9:b5:ff", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:35:43.553340 env[1223]: 2024-02-09 18:35:43.551 [INFO][3661] k8s.go 491: Wrote updated endpoint to datastore 
ContainerID="33a1c014fd09c2318ae73dd4b67d7c7da9fbac689a7ece3d3a0144f0ab5fb229" Namespace="kube-system" Pod="coredns-787d4945fb-qbgfk" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--qbgfk-eth0" Feb 9 18:35:43.570901 env[1223]: time="2024-02-09T18:35:43.570695898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:35:43.570901 env[1223]: time="2024-02-09T18:35:43.570741218Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:35:43.570901 env[1223]: time="2024-02-09T18:35:43.570751738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:35:43.570901 env[1223]: time="2024-02-09T18:35:43.570872099Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/33a1c014fd09c2318ae73dd4b67d7c7da9fbac689a7ece3d3a0144f0ab5fb229 pid=3725 runtime=io.containerd.runc.v2 Feb 9 18:35:43.572858 systemd-networkd[1099]: cali300d05296d6: Link UP Feb 9 18:35:43.573555 systemd-networkd[1099]: cali300d05296d6: Gained carrier Feb 9 18:35:43.577638 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali300d05296d6: link becomes ready Feb 9 18:35:43.588227 env[1223]: 2024-02-09 18:35:43.453 [INFO][3663] utils.go 100: File /var/lib/calico/mtu does not exist Feb 9 18:35:43.588227 env[1223]: 2024-02-09 18:35:43.475 [INFO][3663] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--787d4945fb--6v8j8-eth0 coredns-787d4945fb- kube-system 503ecc7d-dfef-4edc-be40-46d8e27281f8 684 0 2024-02-09 18:35:17 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost 
coredns-787d4945fb-6v8j8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali300d05296d6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="016c0224d42942aa7598c729190f98fad640381e0f018f97aef1ab4d3c298a46" Namespace="kube-system" Pod="coredns-787d4945fb-6v8j8" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--6v8j8-" Feb 9 18:35:43.588227 env[1223]: 2024-02-09 18:35:43.475 [INFO][3663] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="016c0224d42942aa7598c729190f98fad640381e0f018f97aef1ab4d3c298a46" Namespace="kube-system" Pod="coredns-787d4945fb-6v8j8" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--6v8j8-eth0" Feb 9 18:35:43.588227 env[1223]: 2024-02-09 18:35:43.497 [INFO][3688] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="016c0224d42942aa7598c729190f98fad640381e0f018f97aef1ab4d3c298a46" HandleID="k8s-pod-network.016c0224d42942aa7598c729190f98fad640381e0f018f97aef1ab4d3c298a46" Workload="localhost-k8s-coredns--787d4945fb--6v8j8-eth0" Feb 9 18:35:43.588227 env[1223]: 2024-02-09 18:35:43.515 [INFO][3688] ipam_plugin.go 268: Auto assigning IP ContainerID="016c0224d42942aa7598c729190f98fad640381e0f018f97aef1ab4d3c298a46" HandleID="k8s-pod-network.016c0224d42942aa7598c729190f98fad640381e0f018f97aef1ab4d3c298a46" Workload="localhost-k8s-coredns--787d4945fb--6v8j8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400052c2a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-787d4945fb-6v8j8", "timestamp":"2024-02-09 18:35:43.497884639 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 18:35:43.588227 env[1223]: 2024-02-09 18:35:43.515 [INFO][3688] ipam_plugin.go 356: About to acquire host-wide IPAM lock. 
Feb 9 18:35:43.588227 env[1223]: 2024-02-09 18:35:43.532 [INFO][3688] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:35:43.588227 env[1223]: 2024-02-09 18:35:43.533 [INFO][3688] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 9 18:35:43.588227 env[1223]: 2024-02-09 18:35:43.535 [INFO][3688] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.016c0224d42942aa7598c729190f98fad640381e0f018f97aef1ab4d3c298a46" host="localhost" Feb 9 18:35:43.588227 env[1223]: 2024-02-09 18:35:43.546 [INFO][3688] ipam.go 372: Looking up existing affinities for host host="localhost" Feb 9 18:35:43.588227 env[1223]: 2024-02-09 18:35:43.551 [INFO][3688] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 9 18:35:43.588227 env[1223]: 2024-02-09 18:35:43.555 [INFO][3688] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 9 18:35:43.588227 env[1223]: 2024-02-09 18:35:43.557 [INFO][3688] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 9 18:35:43.588227 env[1223]: 2024-02-09 18:35:43.557 [INFO][3688] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.016c0224d42942aa7598c729190f98fad640381e0f018f97aef1ab4d3c298a46" host="localhost" Feb 9 18:35:43.588227 env[1223]: 2024-02-09 18:35:43.561 [INFO][3688] ipam.go 1682: Creating new handle: k8s-pod-network.016c0224d42942aa7598c729190f98fad640381e0f018f97aef1ab4d3c298a46 Feb 9 18:35:43.588227 env[1223]: 2024-02-09 18:35:43.563 [INFO][3688] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.016c0224d42942aa7598c729190f98fad640381e0f018f97aef1ab4d3c298a46" host="localhost" Feb 9 18:35:43.588227 env[1223]: 2024-02-09 18:35:43.569 [INFO][3688] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.016c0224d42942aa7598c729190f98fad640381e0f018f97aef1ab4d3c298a46" host="localhost" Feb 9 18:35:43.588227 env[1223]: 2024-02-09 18:35:43.569 [INFO][3688] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.016c0224d42942aa7598c729190f98fad640381e0f018f97aef1ab4d3c298a46" host="localhost" Feb 9 18:35:43.588227 env[1223]: 2024-02-09 18:35:43.569 [INFO][3688] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 18:35:43.588227 env[1223]: 2024-02-09 18:35:43.569 [INFO][3688] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="016c0224d42942aa7598c729190f98fad640381e0f018f97aef1ab4d3c298a46" HandleID="k8s-pod-network.016c0224d42942aa7598c729190f98fad640381e0f018f97aef1ab4d3c298a46" Workload="localhost-k8s-coredns--787d4945fb--6v8j8-eth0" Feb 9 18:35:43.589308 env[1223]: 2024-02-09 18:35:43.571 [INFO][3663] k8s.go 385: Populated endpoint ContainerID="016c0224d42942aa7598c729190f98fad640381e0f018f97aef1ab4d3c298a46" Namespace="kube-system" Pod="coredns-787d4945fb-6v8j8" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--6v8j8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--6v8j8-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"503ecc7d-dfef-4edc-be40-46d8e27281f8", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 35, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-787d4945fb-6v8j8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali300d05296d6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:35:43.589308 env[1223]: 2024-02-09 18:35:43.571 [INFO][3663] k8s.go 386: Calico CNI using IPs: [192.168.88.130/32] ContainerID="016c0224d42942aa7598c729190f98fad640381e0f018f97aef1ab4d3c298a46" Namespace="kube-system" Pod="coredns-787d4945fb-6v8j8" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--6v8j8-eth0" Feb 9 18:35:43.589308 env[1223]: 2024-02-09 18:35:43.571 [INFO][3663] dataplane_linux.go 68: Setting the host side veth name to cali300d05296d6 ContainerID="016c0224d42942aa7598c729190f98fad640381e0f018f97aef1ab4d3c298a46" Namespace="kube-system" Pod="coredns-787d4945fb-6v8j8" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--6v8j8-eth0" Feb 9 18:35:43.589308 env[1223]: 2024-02-09 18:35:43.573 [INFO][3663] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="016c0224d42942aa7598c729190f98fad640381e0f018f97aef1ab4d3c298a46" Namespace="kube-system" Pod="coredns-787d4945fb-6v8j8" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--6v8j8-eth0" Feb 9 18:35:43.589308 env[1223]: 2024-02-09 18:35:43.578 [INFO][3663] k8s.go 413: Added 
Mac, interface name, and active container ID to endpoint ContainerID="016c0224d42942aa7598c729190f98fad640381e0f018f97aef1ab4d3c298a46" Namespace="kube-system" Pod="coredns-787d4945fb-6v8j8" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--6v8j8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--6v8j8-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"503ecc7d-dfef-4edc-be40-46d8e27281f8", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 35, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"016c0224d42942aa7598c729190f98fad640381e0f018f97aef1ab4d3c298a46", Pod:"coredns-787d4945fb-6v8j8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali300d05296d6", MAC:"4e:f7:7c:e5:59:0a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:35:43.589308 env[1223]: 2024-02-09 18:35:43.586 [INFO][3663] k8s.go 491: Wrote updated endpoint to datastore ContainerID="016c0224d42942aa7598c729190f98fad640381e0f018f97aef1ab4d3c298a46" Namespace="kube-system" Pod="coredns-787d4945fb-6v8j8" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--6v8j8-eth0" Feb 9 18:35:43.600617 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 18:35:43.600923 env[1223]: time="2024-02-09T18:35:43.600682346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:35:43.600923 env[1223]: time="2024-02-09T18:35:43.600727906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:35:43.600923 env[1223]: time="2024-02-09T18:35:43.600739986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:35:43.601032 env[1223]: time="2024-02-09T18:35:43.600882988Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/016c0224d42942aa7598c729190f98fad640381e0f018f97aef1ab4d3c298a46 pid=3770 runtime=io.containerd.runc.v2 Feb 9 18:35:43.625979 env[1223]: time="2024-02-09T18:35:43.625940748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-qbgfk,Uid:09bc0638-8c0c-4129-8ff7-de2aae58b31e,Namespace:kube-system,Attempt:1,} returns sandbox id \"33a1c014fd09c2318ae73dd4b67d7c7da9fbac689a7ece3d3a0144f0ab5fb229\"" Feb 9 18:35:43.626957 kubelet[2169]: E0209 18:35:43.626813 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:43.630302 env[1223]: time="2024-02-09T18:35:43.629962827Z" level=info msg="CreateContainer within sandbox \"33a1c014fd09c2318ae73dd4b67d7c7da9fbac689a7ece3d3a0144f0ab5fb229\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 18:35:43.633413 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 18:35:43.640827 env[1223]: time="2024-02-09T18:35:43.640781451Z" level=info msg="CreateContainer within sandbox \"33a1c014fd09c2318ae73dd4b67d7c7da9fbac689a7ece3d3a0144f0ab5fb229\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1de20493d7b17a3be9d98f7bd0826269a47e857e9a88d38c912ec49333f41d79\"" Feb 9 18:35:43.643004 env[1223]: time="2024-02-09T18:35:43.642967912Z" level=info msg="StartContainer for \"1de20493d7b17a3be9d98f7bd0826269a47e857e9a88d38c912ec49333f41d79\"" Feb 9 18:35:43.654575 env[1223]: time="2024-02-09T18:35:43.654543303Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-787d4945fb-6v8j8,Uid:503ecc7d-dfef-4edc-be40-46d8e27281f8,Namespace:kube-system,Attempt:1,} returns sandbox id \"016c0224d42942aa7598c729190f98fad640381e0f018f97aef1ab4d3c298a46\"" Feb 9 18:35:43.655118 kubelet[2169]: E0209 18:35:43.655102 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:43.663184 env[1223]: time="2024-02-09T18:35:43.663140665Z" level=info msg="CreateContainer within sandbox \"016c0224d42942aa7598c729190f98fad640381e0f018f97aef1ab4d3c298a46\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 18:35:43.674111 env[1223]: time="2024-02-09T18:35:43.674069530Z" level=info msg="CreateContainer within sandbox \"016c0224d42942aa7598c729190f98fad640381e0f018f97aef1ab4d3c298a46\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9bd41ff7617b88a7746d20fda82e40b4f1438c2c719c13298752cbedf26cb7dc\"" Feb 9 18:35:43.675540 env[1223]: time="2024-02-09T18:35:43.675514704Z" level=info msg="StartContainer for \"9bd41ff7617b88a7746d20fda82e40b4f1438c2c719c13298752cbedf26cb7dc\"" Feb 9 18:35:43.703413 env[1223]: time="2024-02-09T18:35:43.702283761Z" level=info msg="StartContainer for \"1de20493d7b17a3be9d98f7bd0826269a47e857e9a88d38c912ec49333f41d79\" returns successfully" Feb 9 18:35:43.732487 env[1223]: time="2024-02-09T18:35:43.732445771Z" level=info msg="StartContainer for \"9bd41ff7617b88a7746d20fda82e40b4f1438c2c719c13298752cbedf26cb7dc\" returns successfully" Feb 9 18:35:43.939862 systemd[1]: Started sshd@7-10.0.0.89:22-10.0.0.1:49846.service. 
Feb 9 18:35:43.941303 kernel: kauditd_printk_skb: 25 callbacks suppressed Feb 9 18:35:43.941398 kernel: audit: type=1130 audit(1707503743.938:283): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.89:22-10.0.0.1:49846 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:43.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.89:22-10.0.0.1:49846 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:43.989000 audit[3889]: USER_ACCT pid=3889 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:43.990768 sshd[3889]: Accepted publickey for core from 10.0.0.1 port 49846 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:35:43.992754 sshd[3889]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:35:43.990000 audit[3889]: CRED_ACQ pid=3889 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:43.998234 kernel: audit: type=1101 audit(1707503743.989:284): pid=3889 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:43.998381 kernel: audit: type=1103 audit(1707503743.990:285): pid=3889 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:43.998426 kernel: audit: type=1006 audit(1707503743.990:286): pid=3889 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Feb 9 18:35:43.998455 kernel: audit: type=1300 audit(1707503743.990:286): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffebfa4520 a2=3 a3=1 items=0 ppid=1 pid=3889 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:43.990000 audit[3889]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffebfa4520 a2=3 a3=1 items=0 ppid=1 pid=3889 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:43.998191 systemd[1]: Started session-8.scope. Feb 9 18:35:43.998539 systemd-logind[1201]: New session 8 of user core. 
Feb 9 18:35:43.990000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 18:35:44.000529 kernel: audit: type=1327 audit(1707503743.990:286): proctitle=737368643A20636F7265205B707269765D Feb 9 18:35:44.001000 audit[3889]: USER_START pid=3889 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:44.002000 audit[3892]: CRED_ACQ pid=3892 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:44.007297 kernel: audit: type=1105 audit(1707503744.001:287): pid=3889 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:44.007366 kernel: audit: type=1103 audit(1707503744.002:288): pid=3892 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:44.088391 env[1223]: time="2024-02-09T18:35:44.088302450Z" level=info msg="StopPodSandbox for \"45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a\"" Feb 9 18:35:44.184158 sshd[3889]: pam_unix(sshd:session): session closed for user core Feb 9 18:35:44.190021 kernel: audit: type=1106 audit(1707503744.183:289): pid=3889 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:44.190111 kernel: audit: type=1104 audit(1707503744.184:290): pid=3889 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:44.183000 audit[3889]: USER_END pid=3889 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:44.184000 audit[3889]: CRED_DISP pid=3889 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:44.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.89:22-10.0.0.1:49846 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:44.186644 systemd[1]: sshd@7-10.0.0.89:22-10.0.0.1:49846.service: Deactivated successfully. Feb 9 18:35:44.187472 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 18:35:44.188203 systemd-logind[1201]: Session 8 logged out. Waiting for processes to exit. Feb 9 18:35:44.188834 systemd-logind[1201]: Removed session 8. 
Feb 9 18:35:44.202087 kubelet[2169]: E0209 18:35:44.202047 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:44.206925 kubelet[2169]: E0209 18:35:44.206905 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:44.220875 kubelet[2169]: I0209 18:35:44.220549 2169 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-qbgfk" podStartSLOduration=27.220509372 pod.CreationTimestamp="2024-02-09 18:35:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:35:44.211603328 +0000 UTC m=+41.261808775" watchObservedRunningTime="2024-02-09 18:35:44.220509372 +0000 UTC m=+41.270714819" Feb 9 18:35:44.234168 env[1223]: 2024-02-09 18:35:44.174 [INFO][3921] k8s.go 578: Cleaning up netns ContainerID="45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" Feb 9 18:35:44.234168 env[1223]: 2024-02-09 18:35:44.174 [INFO][3921] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" iface="eth0" netns="/var/run/netns/cni-ca48dbe6-d939-c507-ca9a-ac0cb335b176" Feb 9 18:35:44.234168 env[1223]: 2024-02-09 18:35:44.174 [INFO][3921] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" iface="eth0" netns="/var/run/netns/cni-ca48dbe6-d939-c507-ca9a-ac0cb335b176" Feb 9 18:35:44.234168 env[1223]: 2024-02-09 18:35:44.175 [INFO][3921] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" iface="eth0" netns="/var/run/netns/cni-ca48dbe6-d939-c507-ca9a-ac0cb335b176" Feb 9 18:35:44.234168 env[1223]: 2024-02-09 18:35:44.175 [INFO][3921] k8s.go 585: Releasing IP address(es) ContainerID="45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" Feb 9 18:35:44.234168 env[1223]: 2024-02-09 18:35:44.175 [INFO][3921] utils.go 188: Calico CNI releasing IP address ContainerID="45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" Feb 9 18:35:44.234168 env[1223]: 2024-02-09 18:35:44.195 [INFO][3928] ipam_plugin.go 415: Releasing address using handleID ContainerID="45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" HandleID="k8s-pod-network.45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" Workload="localhost-k8s-calico--kube--controllers--6d59495b99--568dd-eth0" Feb 9 18:35:44.234168 env[1223]: 2024-02-09 18:35:44.195 [INFO][3928] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:35:44.234168 env[1223]: 2024-02-09 18:35:44.195 [INFO][3928] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:35:44.234168 env[1223]: 2024-02-09 18:35:44.211 [WARNING][3928] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" HandleID="k8s-pod-network.45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" Workload="localhost-k8s-calico--kube--controllers--6d59495b99--568dd-eth0" Feb 9 18:35:44.234168 env[1223]: 2024-02-09 18:35:44.211 [INFO][3928] ipam_plugin.go 443: Releasing address using workloadID ContainerID="45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" HandleID="k8s-pod-network.45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" Workload="localhost-k8s-calico--kube--controllers--6d59495b99--568dd-eth0" Feb 9 18:35:44.234168 env[1223]: 2024-02-09 18:35:44.216 [INFO][3928] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 18:35:44.234168 env[1223]: 2024-02-09 18:35:44.223 [INFO][3921] k8s.go 591: Teardown processing complete. ContainerID="45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" Feb 9 18:35:44.234592 env[1223]: time="2024-02-09T18:35:44.234427582Z" level=info msg="TearDown network for sandbox \"45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a\" successfully" Feb 9 18:35:44.234592 env[1223]: time="2024-02-09T18:35:44.234461623Z" level=info msg="StopPodSandbox for \"45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a\" returns successfully" Feb 9 18:35:44.235972 env[1223]: time="2024-02-09T18:35:44.235941677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d59495b99-568dd,Uid:fe06e181-2fd6-42d6-8ad3-fb53734220fc,Namespace:calico-system,Attempt:1,}" Feb 9 18:35:44.241561 kubelet[2169]: I0209 18:35:44.240879 2169 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-6v8j8" podStartSLOduration=27.240835442 pod.CreationTimestamp="2024-02-09 18:35:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:35:44.229771979 
+0000 UTC m=+41.279977426" watchObservedRunningTime="2024-02-09 18:35:44.240835442 +0000 UTC m=+41.291040889" Feb 9 18:35:44.278000 audit[3976]: NETFILTER_CFG table=filter:109 family=2 entries=14 op=nft_register_rule pid=3976 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:44.278000 audit[3976]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffd506a1b0 a2=0 a3=ffff9a21a6c0 items=0 ppid=2354 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:44.278000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:44.278000 audit[3976]: NETFILTER_CFG table=nat:110 family=2 entries=20 op=nft_register_rule pid=3976 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:44.278000 audit[3976]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffd506a1b0 a2=0 a3=ffff9a21a6c0 items=0 ppid=2354 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:44.278000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:44.314000 audit[4009]: NETFILTER_CFG table=filter:111 family=2 entries=11 op=nft_register_rule pid=4009 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:44.314000 audit[4009]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffe3ce9580 a2=0 a3=fffface3e6c0 items=0 ppid=2354 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:44.314000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:44.322723 systemd[1]: run-netns-cni\x2dca48dbe6\x2dd939\x2dc507\x2dca9a\x2dac0cb335b176.mount: Deactivated successfully. Feb 9 18:35:44.324000 audit[4009]: NETFILTER_CFG table=nat:112 family=2 entries=53 op=nft_register_chain pid=4009 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:44.324000 audit[4009]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21492 a0=3 a1=ffffe3ce9580 a2=0 a3=fffface3e6c0 items=0 ppid=2354 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:44.324000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:44.357879 systemd-networkd[1099]: cali7da199293f8: Link UP Feb 9 18:35:44.358628 systemd-networkd[1099]: cali7da199293f8: Gained carrier Feb 9 18:35:44.359376 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7da199293f8: link becomes ready Feb 9 18:35:44.370493 env[1223]: 2024-02-09 18:35:44.273 [INFO][3945] utils.go 100: File /var/lib/calico/mtu does not exist Feb 9 18:35:44.370493 env[1223]: 2024-02-09 18:35:44.284 [INFO][3945] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6d59495b99--568dd-eth0 calico-kube-controllers-6d59495b99- calico-system fe06e181-2fd6-42d6-8ad3-fb53734220fc 727 0 2024-02-09 18:35:22 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d59495b99 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6d59495b99-568dd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7da199293f8 [] []}} ContainerID="6b6d7eca5eff49e85ec54039fa11398f54d6ff375a871d1d16939a3cfdff4668" Namespace="calico-system" Pod="calico-kube-controllers-6d59495b99-568dd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d59495b99--568dd-" Feb 9 18:35:44.370493 env[1223]: 2024-02-09 18:35:44.284 [INFO][3945] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="6b6d7eca5eff49e85ec54039fa11398f54d6ff375a871d1d16939a3cfdff4668" Namespace="calico-system" Pod="calico-kube-controllers-6d59495b99-568dd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d59495b99--568dd-eth0" Feb 9 18:35:44.370493 env[1223]: 2024-02-09 18:35:44.308 [INFO][3980] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b6d7eca5eff49e85ec54039fa11398f54d6ff375a871d1d16939a3cfdff4668" HandleID="k8s-pod-network.6b6d7eca5eff49e85ec54039fa11398f54d6ff375a871d1d16939a3cfdff4668" Workload="localhost-k8s-calico--kube--controllers--6d59495b99--568dd-eth0" Feb 9 18:35:44.370493 env[1223]: 2024-02-09 18:35:44.329 [INFO][3980] ipam_plugin.go 268: Auto assigning IP ContainerID="6b6d7eca5eff49e85ec54039fa11398f54d6ff375a871d1d16939a3cfdff4668" HandleID="k8s-pod-network.6b6d7eca5eff49e85ec54039fa11398f54d6ff375a871d1d16939a3cfdff4668" Workload="localhost-k8s-calico--kube--controllers--6d59495b99--568dd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004dfb70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6d59495b99-568dd", "timestamp":"2024-02-09 18:35:44.308122435 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 18:35:44.370493 env[1223]: 2024-02-09 18:35:44.329 [INFO][3980] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:35:44.370493 env[1223]: 2024-02-09 18:35:44.329 [INFO][3980] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:35:44.370493 env[1223]: 2024-02-09 18:35:44.329 [INFO][3980] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 9 18:35:44.370493 env[1223]: 2024-02-09 18:35:44.333 [INFO][3980] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6b6d7eca5eff49e85ec54039fa11398f54d6ff375a871d1d16939a3cfdff4668" host="localhost" Feb 9 18:35:44.370493 env[1223]: 2024-02-09 18:35:44.336 [INFO][3980] ipam.go 372: Looking up existing affinities for host host="localhost" Feb 9 18:35:44.370493 env[1223]: 2024-02-09 18:35:44.339 [INFO][3980] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 9 18:35:44.370493 env[1223]: 2024-02-09 18:35:44.340 [INFO][3980] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 9 18:35:44.370493 env[1223]: 2024-02-09 18:35:44.342 [INFO][3980] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 9 18:35:44.370493 env[1223]: 2024-02-09 18:35:44.342 [INFO][3980] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6b6d7eca5eff49e85ec54039fa11398f54d6ff375a871d1d16939a3cfdff4668" host="localhost" Feb 9 18:35:44.370493 env[1223]: 2024-02-09 18:35:44.343 [INFO][3980] ipam.go 1682: Creating new handle: k8s-pod-network.6b6d7eca5eff49e85ec54039fa11398f54d6ff375a871d1d16939a3cfdff4668 Feb 9 18:35:44.370493 env[1223]: 2024-02-09 18:35:44.346 [INFO][3980] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6b6d7eca5eff49e85ec54039fa11398f54d6ff375a871d1d16939a3cfdff4668" host="localhost" Feb 9 
18:35:44.370493 env[1223]: 2024-02-09 18:35:44.351 [INFO][3980] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.6b6d7eca5eff49e85ec54039fa11398f54d6ff375a871d1d16939a3cfdff4668" host="localhost" Feb 9 18:35:44.370493 env[1223]: 2024-02-09 18:35:44.351 [INFO][3980] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.6b6d7eca5eff49e85ec54039fa11398f54d6ff375a871d1d16939a3cfdff4668" host="localhost" Feb 9 18:35:44.370493 env[1223]: 2024-02-09 18:35:44.351 [INFO][3980] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 18:35:44.370493 env[1223]: 2024-02-09 18:35:44.351 [INFO][3980] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="6b6d7eca5eff49e85ec54039fa11398f54d6ff375a871d1d16939a3cfdff4668" HandleID="k8s-pod-network.6b6d7eca5eff49e85ec54039fa11398f54d6ff375a871d1d16939a3cfdff4668" Workload="localhost-k8s-calico--kube--controllers--6d59495b99--568dd-eth0" Feb 9 18:35:44.371023 env[1223]: 2024-02-09 18:35:44.353 [INFO][3945] k8s.go 385: Populated endpoint ContainerID="6b6d7eca5eff49e85ec54039fa11398f54d6ff375a871d1d16939a3cfdff4668" Namespace="calico-system" Pod="calico-kube-controllers-6d59495b99-568dd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d59495b99--568dd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d59495b99--568dd-eth0", GenerateName:"calico-kube-controllers-6d59495b99-", Namespace:"calico-system", SelfLink:"", UID:"fe06e181-2fd6-42d6-8ad3-fb53734220fc", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 35, 22, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"6d59495b99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6d59495b99-568dd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7da199293f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:35:44.371023 env[1223]: 2024-02-09 18:35:44.353 [INFO][3945] k8s.go 386: Calico CNI using IPs: [192.168.88.131/32] ContainerID="6b6d7eca5eff49e85ec54039fa11398f54d6ff375a871d1d16939a3cfdff4668" Namespace="calico-system" Pod="calico-kube-controllers-6d59495b99-568dd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d59495b99--568dd-eth0" Feb 9 18:35:44.371023 env[1223]: 2024-02-09 18:35:44.353 [INFO][3945] dataplane_linux.go 68: Setting the host side veth name to cali7da199293f8 ContainerID="6b6d7eca5eff49e85ec54039fa11398f54d6ff375a871d1d16939a3cfdff4668" Namespace="calico-system" Pod="calico-kube-controllers-6d59495b99-568dd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d59495b99--568dd-eth0" Feb 9 18:35:44.371023 env[1223]: 2024-02-09 18:35:44.358 [INFO][3945] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="6b6d7eca5eff49e85ec54039fa11398f54d6ff375a871d1d16939a3cfdff4668" Namespace="calico-system" Pod="calico-kube-controllers-6d59495b99-568dd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d59495b99--568dd-eth0" Feb 9 18:35:44.371023 env[1223]: 2024-02-09 18:35:44.359 
[INFO][3945] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="6b6d7eca5eff49e85ec54039fa11398f54d6ff375a871d1d16939a3cfdff4668" Namespace="calico-system" Pod="calico-kube-controllers-6d59495b99-568dd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d59495b99--568dd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d59495b99--568dd-eth0", GenerateName:"calico-kube-controllers-6d59495b99-", Namespace:"calico-system", SelfLink:"", UID:"fe06e181-2fd6-42d6-8ad3-fb53734220fc", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 35, 22, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d59495b99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6b6d7eca5eff49e85ec54039fa11398f54d6ff375a871d1d16939a3cfdff4668", Pod:"calico-kube-controllers-6d59495b99-568dd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7da199293f8", MAC:"16:b9:b9:38:ef:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:35:44.371023 env[1223]: 2024-02-09 18:35:44.368 [INFO][3945] k8s.go 491: Wrote updated endpoint to 
datastore ContainerID="6b6d7eca5eff49e85ec54039fa11398f54d6ff375a871d1d16939a3cfdff4668" Namespace="calico-system" Pod="calico-kube-controllers-6d59495b99-568dd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d59495b99--568dd-eth0" Feb 9 18:35:44.382103 env[1223]: time="2024-02-09T18:35:44.381961368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:35:44.382103 env[1223]: time="2024-02-09T18:35:44.382023369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:35:44.382103 env[1223]: time="2024-02-09T18:35:44.382034249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:35:44.382333 env[1223]: time="2024-02-09T18:35:44.382266171Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6b6d7eca5eff49e85ec54039fa11398f54d6ff375a871d1d16939a3cfdff4668 pid=4038 runtime=io.containerd.runc.v2 Feb 9 18:35:44.400665 systemd[1]: run-containerd-runc-k8s.io-6b6d7eca5eff49e85ec54039fa11398f54d6ff375a871d1d16939a3cfdff4668-runc.4Yhv9V.mount: Deactivated successfully. 
Feb 9 18:35:44.454751 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 18:35:44.475915 env[1223]: time="2024-02-09T18:35:44.475863650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d59495b99-568dd,Uid:fe06e181-2fd6-42d6-8ad3-fb53734220fc,Namespace:calico-system,Attempt:1,} returns sandbox id \"6b6d7eca5eff49e85ec54039fa11398f54d6ff375a871d1d16939a3cfdff4668\"" Feb 9 18:35:44.477523 env[1223]: time="2024-02-09T18:35:44.477496306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\"" Feb 9 18:35:44.811495 systemd-networkd[1099]: cali300d05296d6: Gained IPv6LL Feb 9 18:35:45.087882 env[1223]: time="2024-02-09T18:35:45.087778221Z" level=info msg="StopPodSandbox for \"769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c\"" Feb 9 18:35:45.159760 env[1223]: 2024-02-09 18:35:45.129 [INFO][4108] k8s.go 578: Cleaning up netns ContainerID="769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" Feb 9 18:35:45.159760 env[1223]: 2024-02-09 18:35:45.129 [INFO][4108] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" iface="eth0" netns="/var/run/netns/cni-8d186661-b7f0-b81d-22e7-2274681a4c84" Feb 9 18:35:45.159760 env[1223]: 2024-02-09 18:35:45.129 [INFO][4108] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" iface="eth0" netns="/var/run/netns/cni-8d186661-b7f0-b81d-22e7-2274681a4c84" Feb 9 18:35:45.159760 env[1223]: 2024-02-09 18:35:45.130 [INFO][4108] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" iface="eth0" netns="/var/run/netns/cni-8d186661-b7f0-b81d-22e7-2274681a4c84" Feb 9 18:35:45.159760 env[1223]: 2024-02-09 18:35:45.130 [INFO][4108] k8s.go 585: Releasing IP address(es) ContainerID="769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" Feb 9 18:35:45.159760 env[1223]: 2024-02-09 18:35:45.130 [INFO][4108] utils.go 188: Calico CNI releasing IP address ContainerID="769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" Feb 9 18:35:45.159760 env[1223]: 2024-02-09 18:35:45.145 [INFO][4117] ipam_plugin.go 415: Releasing address using handleID ContainerID="769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" HandleID="k8s-pod-network.769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" Workload="localhost-k8s-csi--node--driver--tbbzb-eth0" Feb 9 18:35:45.159760 env[1223]: 2024-02-09 18:35:45.146 [INFO][4117] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:35:45.159760 env[1223]: 2024-02-09 18:35:45.146 [INFO][4117] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:35:45.159760 env[1223]: 2024-02-09 18:35:45.154 [WARNING][4117] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" HandleID="k8s-pod-network.769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" Workload="localhost-k8s-csi--node--driver--tbbzb-eth0" Feb 9 18:35:45.159760 env[1223]: 2024-02-09 18:35:45.154 [INFO][4117] ipam_plugin.go 443: Releasing address using workloadID ContainerID="769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" HandleID="k8s-pod-network.769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" Workload="localhost-k8s-csi--node--driver--tbbzb-eth0" Feb 9 18:35:45.159760 env[1223]: 2024-02-09 18:35:45.156 [INFO][4117] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 18:35:45.159760 env[1223]: 2024-02-09 18:35:45.158 [INFO][4108] k8s.go 591: Teardown processing complete. ContainerID="769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" Feb 9 18:35:45.161912 systemd[1]: run-netns-cni\x2d8d186661\x2db7f0\x2db81d\x2d22e7\x2d2274681a4c84.mount: Deactivated successfully. Feb 9 18:35:45.162807 env[1223]: time="2024-02-09T18:35:45.162757831Z" level=info msg="TearDown network for sandbox \"769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c\" successfully" Feb 9 18:35:45.162846 env[1223]: time="2024-02-09T18:35:45.162803072Z" level=info msg="StopPodSandbox for \"769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c\" returns successfully" Feb 9 18:35:45.163513 env[1223]: time="2024-02-09T18:35:45.163477878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tbbzb,Uid:85c81bc7-ead8-41f3-b0f6-13db63c2997b,Namespace:calico-system,Attempt:1,}" Feb 9 18:35:45.209541 kubelet[2169]: E0209 18:35:45.209514 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:45.209912 kubelet[2169]: E0209 18:35:45.209549 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:45.262125 systemd-networkd[1099]: cali8d00f4f0cee: Link UP Feb 9 18:35:45.263476 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 18:35:45.263544 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali8d00f4f0cee: link becomes ready Feb 9 18:35:45.264462 systemd-networkd[1099]: cali8d00f4f0cee: Gained carrier Feb 9 18:35:45.273917 env[1223]: 2024-02-09 18:35:45.190 [INFO][4124] utils.go 100: File /var/lib/calico/mtu does not exist Feb 9 18:35:45.273917 env[1223]: 2024-02-09 18:35:45.203 [INFO][4124] plugin.go 327: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--tbbzb-eth0 csi-node-driver- calico-system 85c81bc7-ead8-41f3-b0f6-13db63c2997b 753 0 2024-02-09 18:35:22 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7c77f88967 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-tbbzb eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali8d00f4f0cee [] []}} ContainerID="6b88367795627ec24daa5a2a11c847031f7a611dd253fe4f95a5c0fe0cb18ffe" Namespace="calico-system" Pod="csi-node-driver-tbbzb" WorkloadEndpoint="localhost-k8s-csi--node--driver--tbbzb-" Feb 9 18:35:45.273917 env[1223]: 2024-02-09 18:35:45.204 [INFO][4124] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="6b88367795627ec24daa5a2a11c847031f7a611dd253fe4f95a5c0fe0cb18ffe" Namespace="calico-system" Pod="csi-node-driver-tbbzb" WorkloadEndpoint="localhost-k8s-csi--node--driver--tbbzb-eth0" Feb 9 18:35:45.273917 env[1223]: 2024-02-09 18:35:45.226 [INFO][4138] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b88367795627ec24daa5a2a11c847031f7a611dd253fe4f95a5c0fe0cb18ffe" HandleID="k8s-pod-network.6b88367795627ec24daa5a2a11c847031f7a611dd253fe4f95a5c0fe0cb18ffe" Workload="localhost-k8s-csi--node--driver--tbbzb-eth0" Feb 9 18:35:45.273917 env[1223]: 2024-02-09 18:35:45.238 [INFO][4138] ipam_plugin.go 268: Auto assigning IP ContainerID="6b88367795627ec24daa5a2a11c847031f7a611dd253fe4f95a5c0fe0cb18ffe" HandleID="k8s-pod-network.6b88367795627ec24daa5a2a11c847031f7a611dd253fe4f95a5c0fe0cb18ffe" Workload="localhost-k8s-csi--node--driver--tbbzb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400029daf0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-tbbzb", 
"timestamp":"2024-02-09 18:35:45.226945662 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 18:35:45.273917 env[1223]: 2024-02-09 18:35:45.238 [INFO][4138] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:35:45.273917 env[1223]: 2024-02-09 18:35:45.238 [INFO][4138] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:35:45.273917 env[1223]: 2024-02-09 18:35:45.238 [INFO][4138] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 9 18:35:45.273917 env[1223]: 2024-02-09 18:35:45.240 [INFO][4138] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6b88367795627ec24daa5a2a11c847031f7a611dd253fe4f95a5c0fe0cb18ffe" host="localhost" Feb 9 18:35:45.273917 env[1223]: 2024-02-09 18:35:45.243 [INFO][4138] ipam.go 372: Looking up existing affinities for host host="localhost" Feb 9 18:35:45.273917 env[1223]: 2024-02-09 18:35:45.246 [INFO][4138] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 9 18:35:45.273917 env[1223]: 2024-02-09 18:35:45.247 [INFO][4138] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 9 18:35:45.273917 env[1223]: 2024-02-09 18:35:45.249 [INFO][4138] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 9 18:35:45.273917 env[1223]: 2024-02-09 18:35:45.249 [INFO][4138] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6b88367795627ec24daa5a2a11c847031f7a611dd253fe4f95a5c0fe0cb18ffe" host="localhost" Feb 9 18:35:45.273917 env[1223]: 2024-02-09 18:35:45.251 [INFO][4138] ipam.go 1682: Creating new handle: k8s-pod-network.6b88367795627ec24daa5a2a11c847031f7a611dd253fe4f95a5c0fe0cb18ffe Feb 9 18:35:45.273917 env[1223]: 2024-02-09 18:35:45.254 
[INFO][4138] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6b88367795627ec24daa5a2a11c847031f7a611dd253fe4f95a5c0fe0cb18ffe" host="localhost" Feb 9 18:35:45.273917 env[1223]: 2024-02-09 18:35:45.258 [INFO][4138] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.6b88367795627ec24daa5a2a11c847031f7a611dd253fe4f95a5c0fe0cb18ffe" host="localhost" Feb 9 18:35:45.273917 env[1223]: 2024-02-09 18:35:45.258 [INFO][4138] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.6b88367795627ec24daa5a2a11c847031f7a611dd253fe4f95a5c0fe0cb18ffe" host="localhost" Feb 9 18:35:45.273917 env[1223]: 2024-02-09 18:35:45.258 [INFO][4138] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 18:35:45.273917 env[1223]: 2024-02-09 18:35:45.258 [INFO][4138] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="6b88367795627ec24daa5a2a11c847031f7a611dd253fe4f95a5c0fe0cb18ffe" HandleID="k8s-pod-network.6b88367795627ec24daa5a2a11c847031f7a611dd253fe4f95a5c0fe0cb18ffe" Workload="localhost-k8s-csi--node--driver--tbbzb-eth0" Feb 9 18:35:45.274579 env[1223]: 2024-02-09 18:35:45.260 [INFO][4124] k8s.go 385: Populated endpoint ContainerID="6b88367795627ec24daa5a2a11c847031f7a611dd253fe4f95a5c0fe0cb18ffe" Namespace="calico-system" Pod="csi-node-driver-tbbzb" WorkloadEndpoint="localhost-k8s-csi--node--driver--tbbzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tbbzb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"85c81bc7-ead8-41f3-b0f6-13db63c2997b", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 35, 22, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-tbbzb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8d00f4f0cee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:35:45.274579 env[1223]: 2024-02-09 18:35:45.260 [INFO][4124] k8s.go 386: Calico CNI using IPs: [192.168.88.132/32] ContainerID="6b88367795627ec24daa5a2a11c847031f7a611dd253fe4f95a5c0fe0cb18ffe" Namespace="calico-system" Pod="csi-node-driver-tbbzb" WorkloadEndpoint="localhost-k8s-csi--node--driver--tbbzb-eth0" Feb 9 18:35:45.274579 env[1223]: 2024-02-09 18:35:45.261 [INFO][4124] dataplane_linux.go 68: Setting the host side veth name to cali8d00f4f0cee ContainerID="6b88367795627ec24daa5a2a11c847031f7a611dd253fe4f95a5c0fe0cb18ffe" Namespace="calico-system" Pod="csi-node-driver-tbbzb" WorkloadEndpoint="localhost-k8s-csi--node--driver--tbbzb-eth0" Feb 9 18:35:45.274579 env[1223]: 2024-02-09 18:35:45.263 [INFO][4124] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="6b88367795627ec24daa5a2a11c847031f7a611dd253fe4f95a5c0fe0cb18ffe" Namespace="calico-system" Pod="csi-node-driver-tbbzb" WorkloadEndpoint="localhost-k8s-csi--node--driver--tbbzb-eth0" Feb 9 18:35:45.274579 env[1223]: 2024-02-09 18:35:45.263 [INFO][4124] k8s.go 413: 
Added Mac, interface name, and active container ID to endpoint ContainerID="6b88367795627ec24daa5a2a11c847031f7a611dd253fe4f95a5c0fe0cb18ffe" Namespace="calico-system" Pod="csi-node-driver-tbbzb" WorkloadEndpoint="localhost-k8s-csi--node--driver--tbbzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tbbzb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"85c81bc7-ead8-41f3-b0f6-13db63c2997b", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 35, 22, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6b88367795627ec24daa5a2a11c847031f7a611dd253fe4f95a5c0fe0cb18ffe", Pod:"csi-node-driver-tbbzb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8d00f4f0cee", MAC:"da:7c:7b:5e:f8:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:35:45.274579 env[1223]: 2024-02-09 18:35:45.272 [INFO][4124] k8s.go 491: Wrote updated endpoint to datastore ContainerID="6b88367795627ec24daa5a2a11c847031f7a611dd253fe4f95a5c0fe0cb18ffe" Namespace="calico-system" 
Pod="csi-node-driver-tbbzb" WorkloadEndpoint="localhost-k8s-csi--node--driver--tbbzb-eth0" Feb 9 18:35:45.284102 env[1223]: time="2024-02-09T18:35:45.284013186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:35:45.284102 env[1223]: time="2024-02-09T18:35:45.284058347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:35:45.284102 env[1223]: time="2024-02-09T18:35:45.284077307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:35:45.286595 env[1223]: time="2024-02-09T18:35:45.286537130Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6b88367795627ec24daa5a2a11c847031f7a611dd253fe4f95a5c0fe0cb18ffe pid=4163 runtime=io.containerd.runc.v2 Feb 9 18:35:45.323433 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 18:35:45.333674 env[1223]: time="2024-02-09T18:35:45.333638283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tbbzb,Uid:85c81bc7-ead8-41f3-b0f6-13db63c2997b,Namespace:calico-system,Attempt:1,} returns sandbox id \"6b88367795627ec24daa5a2a11c847031f7a611dd253fe4f95a5c0fe0cb18ffe\"" Feb 9 18:35:45.387762 systemd-networkd[1099]: cali303a35b8668: Gained IPv6LL Feb 9 18:35:45.451670 systemd-networkd[1099]: cali7da199293f8: Gained IPv6LL Feb 9 18:35:45.606030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount113096442.mount: Deactivated successfully. 
Feb 9 18:35:46.212007 kubelet[2169]: E0209 18:35:46.211972 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:46.213420 kubelet[2169]: E0209 18:35:46.212708 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:46.297820 env[1223]: time="2024-02-09T18:35:46.297776377Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:46.299044 env[1223]: time="2024-02-09T18:35:46.299018388Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:094645649618376e48b5ec13a94a164d53dbdf819b7ab644f080b751f24560c8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:46.301921 env[1223]: time="2024-02-09T18:35:46.301893414Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:46.304588 env[1223]: time="2024-02-09T18:35:46.304557478Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:e264ab1fb2f1ae90dd1d84e226d11d2eb4350e74ac27de4c65f29f5aadba5bb1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:46.305226 env[1223]: time="2024-02-09T18:35:46.305198004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\" returns image reference \"sha256:094645649618376e48b5ec13a94a164d53dbdf819b7ab644f080b751f24560c8\"" Feb 9 18:35:46.306306 env[1223]: time="2024-02-09T18:35:46.306279494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\"" Feb 9 18:35:46.320695 env[1223]: 
time="2024-02-09T18:35:46.320655583Z" level=info msg="CreateContainer within sandbox \"6b6d7eca5eff49e85ec54039fa11398f54d6ff375a871d1d16939a3cfdff4668\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 9 18:35:46.333186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount859926851.mount: Deactivated successfully. Feb 9 18:35:46.336627 env[1223]: time="2024-02-09T18:35:46.336593527Z" level=info msg="CreateContainer within sandbox \"6b6d7eca5eff49e85ec54039fa11398f54d6ff375a871d1d16939a3cfdff4668\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d753e69c050c3b2ea477858df693c512adf63d5e84e7be92da1b270b847277dd\"" Feb 9 18:35:46.337191 env[1223]: time="2024-02-09T18:35:46.337035531Z" level=info msg="StartContainer for \"d753e69c050c3b2ea477858df693c512adf63d5e84e7be92da1b270b847277dd\"" Feb 9 18:35:46.479281 env[1223]: time="2024-02-09T18:35:46.479187012Z" level=info msg="StartContainer for \"d753e69c050c3b2ea477858df693c512adf63d5e84e7be92da1b270b847277dd\" returns successfully" Feb 9 18:35:46.859535 systemd-networkd[1099]: cali8d00f4f0cee: Gained IPv6LL Feb 9 18:35:47.275033 kubelet[2169]: I0209 18:35:47.275011 2169 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6d59495b99-568dd" podStartSLOduration=-9.223372011579807e+09 pod.CreationTimestamp="2024-02-09 18:35:22 +0000 UTC" firstStartedPulling="2024-02-09 18:35:44.477065062 +0000 UTC m=+41.527270469" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:35:47.225016938 +0000 UTC m=+44.275222385" watchObservedRunningTime="2024-02-09 18:35:47.274969459 +0000 UTC m=+44.325174866" Feb 9 18:35:47.461430 env[1223]: time="2024-02-09T18:35:47.461388108Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:47.463619 
env[1223]: time="2024-02-09T18:35:47.463582247Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4b71e7439e0eba34a97844591560a009f37e8e6c17a386a34d416c1cc872dee8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:47.464952 env[1223]: time="2024-02-09T18:35:47.464923339Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:47.466421 env[1223]: time="2024-02-09T18:35:47.466394272Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:2b9021393c17e87ba8a3c89f5b3719941812f4e4751caa0b71eb2233bff48738,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:47.467134 env[1223]: time="2024-02-09T18:35:47.467093758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\" returns image reference \"sha256:4b71e7439e0eba34a97844591560a009f37e8e6c17a386a34d416c1cc872dee8\"" Feb 9 18:35:47.468877 env[1223]: time="2024-02-09T18:35:47.468851174Z" level=info msg="CreateContainer within sandbox \"6b88367795627ec24daa5a2a11c847031f7a611dd253fe4f95a5c0fe0cb18ffe\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 9 18:35:47.481713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4152850327.mount: Deactivated successfully. 
Feb 9 18:35:47.482556 env[1223]: time="2024-02-09T18:35:47.482188172Z" level=info msg="CreateContainer within sandbox \"6b88367795627ec24daa5a2a11c847031f7a611dd253fe4f95a5c0fe0cb18ffe\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8a833eb809c9164d6b939f655cd0e759df2ec5df2811677adef1a7ba25939be4\"" Feb 9 18:35:47.482620 env[1223]: time="2024-02-09T18:35:47.482593735Z" level=info msg="StartContainer for \"8a833eb809c9164d6b939f655cd0e759df2ec5df2811677adef1a7ba25939be4\"" Feb 9 18:35:47.558528 env[1223]: time="2024-02-09T18:35:47.558411966Z" level=info msg="StartContainer for \"8a833eb809c9164d6b939f655cd0e759df2ec5df2811677adef1a7ba25939be4\" returns successfully" Feb 9 18:35:47.559365 env[1223]: time="2024-02-09T18:35:47.559324854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\"" Feb 9 18:35:48.780830 env[1223]: time="2024-02-09T18:35:48.780775490Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:48.782223 env[1223]: time="2024-02-09T18:35:48.782192983Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9dbda087e98c46610fb8629cf530f1fe49eee4b17d2afe455664ca446ec39d43,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:48.783742 env[1223]: time="2024-02-09T18:35:48.783717676Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:48.784933 env[1223]: time="2024-02-09T18:35:48.784907046Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:45a7aba6020a7cf7b866cb8a8d481b30c97e9b3407e1459aaa65a5b4cc06633a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:48.785278 
env[1223]: time="2024-02-09T18:35:48.785251449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\" returns image reference \"sha256:9dbda087e98c46610fb8629cf530f1fe49eee4b17d2afe455664ca446ec39d43\"" Feb 9 18:35:48.786837 env[1223]: time="2024-02-09T18:35:48.786804903Z" level=info msg="CreateContainer within sandbox \"6b88367795627ec24daa5a2a11c847031f7a611dd253fe4f95a5c0fe0cb18ffe\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 9 18:35:48.798833 env[1223]: time="2024-02-09T18:35:48.798796767Z" level=info msg="CreateContainer within sandbox \"6b88367795627ec24daa5a2a11c847031f7a611dd253fe4f95a5c0fe0cb18ffe\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"377fb4a84215edd2572e1b3871c85dbe8811ecff68b9cec6776048dc1442dc22\"" Feb 9 18:35:48.799448 env[1223]: time="2024-02-09T18:35:48.799418652Z" level=info msg="StartContainer for \"377fb4a84215edd2572e1b3871c85dbe8811ecff68b9cec6776048dc1442dc22\"" Feb 9 18:35:48.878958 env[1223]: time="2024-02-09T18:35:48.878915542Z" level=info msg="StartContainer for \"377fb4a84215edd2572e1b3871c85dbe8811ecff68b9cec6776048dc1442dc22\" returns successfully" Feb 9 18:35:49.120308 kubelet[2169]: I0209 18:35:49.120231 2169 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 9 18:35:49.120965 kubelet[2169]: I0209 18:35:49.120952 2169 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 9 18:35:49.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.89:22-10.0.0.1:49856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:35:49.187369 systemd[1]: Started sshd@8-10.0.0.89:22-10.0.0.1:49856.service. Feb 9 18:35:49.190583 kernel: kauditd_printk_skb: 13 callbacks suppressed Feb 9 18:35:49.190688 kernel: audit: type=1130 audit(1707503749.186:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.89:22-10.0.0.1:49856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:49.230522 kubelet[2169]: I0209 18:35:49.230484 2169 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-tbbzb" podStartSLOduration=-9.223372009624329e+09 pod.CreationTimestamp="2024-02-09 18:35:22 +0000 UTC" firstStartedPulling="2024-02-09 18:35:45.334706053 +0000 UTC m=+42.384911460" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:35:49.229835795 +0000 UTC m=+46.280041242" watchObservedRunningTime="2024-02-09 18:35:49.23044788 +0000 UTC m=+46.280653327" Feb 9 18:35:49.230000 audit[4435]: USER_ACCT pid=4435 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:49.232550 sshd[4435]: Accepted publickey for core from 10.0.0.1 port 49856 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:35:49.233934 sshd[4435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:35:49.232000 audit[4435]: CRED_ACQ pid=4435 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:49.237389 kernel: audit: type=1101 audit(1707503749.230:297): pid=4435 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:49.237457 kernel: audit: type=1103 audit(1707503749.232:298): pid=4435 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:49.237481 kernel: audit: type=1006 audit(1707503749.232:299): pid=4435 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Feb 9 18:35:49.238677 kernel: audit: type=1300 audit(1707503749.232:299): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffedbbbe60 a2=3 a3=1 items=0 ppid=1 pid=4435 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:49.232000 audit[4435]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffedbbbe60 a2=3 a3=1 items=0 ppid=1 pid=4435 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:49.237658 systemd-logind[1201]: New session 9 of user core. Feb 9 18:35:49.238521 systemd[1]: Started session-9.scope. 
Feb 9 18:35:49.232000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 18:35:49.242405 kernel: audit: type=1327 audit(1707503749.232:299): proctitle=737368643A20636F7265205B707269765D Feb 9 18:35:49.241000 audit[4435]: USER_START pid=4435 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:49.241000 audit[4438]: CRED_ACQ pid=4438 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:49.248288 kernel: audit: type=1105 audit(1707503749.241:300): pid=4435 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:49.248332 kernel: audit: type=1103 audit(1707503749.241:301): pid=4438 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:49.360326 sshd[4435]: pam_unix(sshd:session): session closed for user core Feb 9 18:35:49.359000 audit[4435]: USER_END pid=4435 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:49.362630 systemd[1]: sshd@8-10.0.0.89:22-10.0.0.1:49856.service: Deactivated successfully. 
Feb 9 18:35:49.363740 systemd-logind[1201]: Session 9 logged out. Waiting for processes to exit. Feb 9 18:35:49.363800 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 18:35:49.360000 audit[4435]: CRED_DISP pid=4435 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:49.364620 systemd-logind[1201]: Removed session 9. Feb 9 18:35:49.365944 kernel: audit: type=1106 audit(1707503749.359:302): pid=4435 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:49.366007 kernel: audit: type=1104 audit(1707503749.360:303): pid=4435 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:49.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.89:22-10.0.0.1:49856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:35:52.035871 kubelet[2169]: I0209 18:35:52.035821 2169 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 9 18:35:52.036470 kubelet[2169]: E0209 18:35:52.036455 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:52.084000 audit[4549]: NETFILTER_CFG table=filter:113 family=2 entries=7 op=nft_register_rule pid=4549 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:52.084000 audit[4549]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=fffff78b5fd0 a2=0 a3=ffffb33726c0 items=0 ppid=2354 pid=4549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:52.084000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:52.085000 audit[4549]: NETFILTER_CFG table=nat:114 family=2 entries=75 op=nft_register_chain pid=4549 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:52.085000 audit[4549]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=fffff78b5fd0 a2=0 a3=ffffb33726c0 items=0 ppid=2354 pid=4549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:52.085000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:52.226243 kubelet[2169]: E0209 18:35:52.226215 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Feb 9 18:35:52.593000 audit[4596]: AVC avc: denied { bpf } for pid=4596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:52.593000 audit[4596]: AVC avc: denied { bpf } for pid=4596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:52.593000 audit[4596]: AVC avc: denied { perfmon } for pid=4596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:52.593000 audit[4596]: AVC avc: denied { perfmon } for pid=4596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:52.593000 audit[4596]: AVC avc: denied { perfmon } for pid=4596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:52.593000 audit[4596]: AVC avc: denied { perfmon } for pid=4596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:52.593000 audit[4596]: AVC avc: denied { perfmon } for pid=4596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:52.593000 audit[4596]: AVC avc: denied { bpf } for pid=4596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:52.593000 audit[4596]: AVC avc: denied { bpf } for pid=4596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:52.593000 audit: BPF prog-id=10 op=LOAD Feb 9 18:35:52.593000 
audit[4596]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffef5213a8 a2=70 a3=0 items=0 ppid=4552 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:52.593000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 18:35:52.594000 audit: BPF prog-id=10 op=UNLOAD Feb 9 18:35:52.594000 audit[4596]: AVC avc: denied { bpf } for pid=4596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:52.594000 audit[4596]: AVC avc: denied { bpf } for pid=4596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:52.594000 audit[4596]: AVC avc: denied { perfmon } for pid=4596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:52.594000 audit[4596]: AVC avc: denied { perfmon } for pid=4596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:52.594000 audit[4596]: AVC avc: denied { perfmon } for pid=4596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:52.594000 audit[4596]: AVC avc: denied { perfmon } for pid=4596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:52.594000 audit[4596]: AVC avc: denied { perfmon } for pid=4596 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:52.594000 audit[4596]: AVC avc: denied { bpf } for pid=4596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:52.594000 audit[4596]: AVC avc: denied { bpf } for pid=4596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:52.594000 audit: BPF prog-id=11 op=LOAD Feb 9 18:35:52.594000 audit[4596]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffef5213a8 a2=70 a3=4a174c items=0 ppid=4552 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:52.594000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 18:35:52.594000 audit: BPF prog-id=11 op=UNLOAD Feb 9 18:35:52.594000 audit[4596]: AVC avc: denied { bpf } for pid=4596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:52.594000 audit[4596]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=0 a1=ffffef5213d8 a2=70 a3=3d85579f items=0 ppid=4552 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:52.594000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 18:35:52.594000 audit[4596]: AVC avc: denied { bpf } for pid=4596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:52.594000 audit[4596]: AVC avc: denied { bpf } for pid=4596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:52.594000 audit[4596]: AVC avc: denied { bpf } for pid=4596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:52.594000 audit[4596]: AVC avc: denied { perfmon } for pid=4596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:52.594000 audit[4596]: AVC avc: denied { perfmon } for pid=4596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:52.594000 audit[4596]: AVC avc: denied { perfmon } for pid=4596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:52.594000 audit[4596]: AVC avc: denied { perfmon } for pid=4596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:52.594000 audit[4596]: AVC avc: denied { perfmon } for pid=4596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:52.594000 audit[4596]: AVC avc: denied { bpf } for pid=4596 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:52.594000 audit[4596]: AVC avc: denied { bpf } for pid=4596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:52.594000 audit: BPF prog-id=12 op=LOAD Feb 9 18:35:52.594000 audit[4596]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffef521328 a2=70 a3=3d8557b9 items=0 ppid=4552 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:52.594000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 18:35:52.598000 audit[4600]: AVC avc: denied { bpf } for pid=4600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:52.598000 audit[4600]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffffa2c3688 a2=70 a3=0 items=0 ppid=4552 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:52.598000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 18:35:52.598000 audit[4600]: AVC avc: denied { bpf } for pid=4600 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:52.598000 
audit[4600]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffffa2c3568 a2=70 a3=2 items=0 ppid=4552 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:52.598000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 18:35:52.607000 audit: BPF prog-id=12 op=UNLOAD Feb 9 18:35:52.653000 audit[4631]: NETFILTER_CFG table=mangle:115 family=2 entries=19 op=nft_register_chain pid=4631 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 18:35:52.653000 audit[4631]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6800 a0=3 a1=ffffdcf1b850 a2=0 a3=ffffb464bfa8 items=0 ppid=4552 pid=4631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:52.653000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 18:35:52.655000 audit[4630]: NETFILTER_CFG table=raw:116 family=2 entries=19 op=nft_register_chain pid=4630 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 18:35:52.655000 audit[4630]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6132 a0=3 a1=ffffc44c8630 a2=0 a3=ffff887ccfa8 items=0 ppid=4552 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:52.655000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 18:35:52.662000 audit[4633]: NETFILTER_CFG table=nat:117 family=2 entries=16 op=nft_register_chain pid=4633 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 18:35:52.662000 audit[4633]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5188 a0=3 a1=ffffd870c040 a2=0 a3=ffffbc8a6fa8 items=0 ppid=4552 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:52.662000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 18:35:52.665000 audit[4636]: NETFILTER_CFG table=filter:118 family=2 entries=157 op=nft_register_chain pid=4636 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 18:35:52.665000 audit[4636]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=86820 a0=3 a1=fffffc010220 a2=0 a3=ffff821a3fa8 items=0 ppid=4552 pid=4636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:52.665000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 18:35:53.495307 systemd-networkd[1099]: vxlan.calico: Link UP Feb 9 18:35:53.495315 systemd-networkd[1099]: vxlan.calico: Gained carrier Feb 9 18:35:54.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.89:22-10.0.0.1:45436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 18:35:54.363753 systemd[1]: Started sshd@9-10.0.0.89:22-10.0.0.1:45436.service. Feb 9 18:35:54.364435 kernel: kauditd_printk_skb: 68 callbacks suppressed Feb 9 18:35:54.364487 kernel: audit: type=1130 audit(1707503754.362:320): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.89:22-10.0.0.1:45436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:54.411000 audit[4689]: USER_ACCT pid=4689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:54.412636 sshd[4689]: Accepted publickey for core from 10.0.0.1 port 45436 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:35:54.413000 audit[4689]: CRED_ACQ pid=4689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:54.415764 sshd[4689]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:35:54.417526 kernel: audit: type=1101 audit(1707503754.411:321): pid=4689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:54.417590 kernel: audit: type=1103 audit(1707503754.413:322): pid=4689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:54.417613 kernel: audit: type=1006 audit(1707503754.413:323): 
pid=4689 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Feb 9 18:35:54.413000 audit[4689]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffa3d7480 a2=3 a3=1 items=0 ppid=1 pid=4689 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:54.420306 systemd[1]: Started session-10.scope. Feb 9 18:35:54.420508 systemd-logind[1201]: New session 10 of user core. Feb 9 18:35:54.421261 kernel: audit: type=1300 audit(1707503754.413:323): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffa3d7480 a2=3 a3=1 items=0 ppid=1 pid=4689 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:54.421316 kernel: audit: type=1327 audit(1707503754.413:323): proctitle=737368643A20636F7265205B707269765D Feb 9 18:35:54.413000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 18:35:54.422000 audit[4689]: USER_START pid=4689 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:54.424000 audit[4692]: CRED_ACQ pid=4692 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:54.428811 kernel: audit: type=1105 audit(1707503754.422:324): pid=4689 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:54.428848 kernel: audit: type=1103 audit(1707503754.424:325): pid=4692 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:54.552934 sshd[4689]: pam_unix(sshd:session): session closed for user core Feb 9 18:35:54.552000 audit[4689]: USER_END pid=4689 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:54.555484 systemd-logind[1201]: Session 10 logged out. Waiting for processes to exit. Feb 9 18:35:54.555697 systemd[1]: sshd@9-10.0.0.89:22-10.0.0.1:45436.service: Deactivated successfully. Feb 9 18:35:54.556538 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 18:35:54.552000 audit[4689]: CRED_DISP pid=4689 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:54.556945 systemd-logind[1201]: Removed session 10. 
Feb 9 18:35:54.558747 kernel: audit: type=1106 audit(1707503754.552:326): pid=4689 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:54.558817 kernel: audit: type=1104 audit(1707503754.552:327): pid=4689 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:54.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.89:22-10.0.0.1:45436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:55.051541 systemd-networkd[1099]: vxlan.calico: Gained IPv6LL Feb 9 18:35:56.465683 systemd[1]: run-containerd-runc-k8s.io-ff74826e677b5e6a7386f16c2ab84443ed7382c9c98cc76278fd2a0ed8f4d539-runc.MDadpC.mount: Deactivated successfully. Feb 9 18:35:56.555576 kubelet[2169]: E0209 18:35:56.555114 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:59.556478 systemd[1]: Started sshd@10-10.0.0.89:22-10.0.0.1:45452.service. Feb 9 18:35:59.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.89:22-10.0.0.1:45452 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:35:59.559549 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 18:35:59.559631 kernel: audit: type=1130 audit(1707503759.556:329): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.89:22-10.0.0.1:45452 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:59.596000 audit[4735]: USER_ACCT pid=4735 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:59.596979 sshd[4735]: Accepted publickey for core from 10.0.0.1 port 45452 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:35:59.598336 sshd[4735]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:35:59.597000 audit[4735]: CRED_ACQ pid=4735 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:59.601319 kernel: audit: type=1101 audit(1707503759.596:330): pid=4735 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:59.601392 kernel: audit: type=1103 audit(1707503759.597:331): pid=4735 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:59.602781 kernel: audit: type=1006 audit(1707503759.597:332): pid=4735 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) 
old-ses=4294967295 ses=11 res=1 Feb 9 18:35:59.602847 kernel: audit: type=1300 audit(1707503759.597:332): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff238b310 a2=3 a3=1 items=0 ppid=1 pid=4735 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:59.597000 audit[4735]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff238b310 a2=3 a3=1 items=0 ppid=1 pid=4735 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:59.603635 systemd-logind[1201]: New session 11 of user core. Feb 9 18:35:59.604518 systemd[1]: Started session-11.scope. Feb 9 18:35:59.597000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 18:35:59.605979 kernel: audit: type=1327 audit(1707503759.597:332): proctitle=737368643A20636F7265205B707269765D Feb 9 18:35:59.607000 audit[4735]: USER_START pid=4735 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:59.608000 audit[4738]: CRED_ACQ pid=4738 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:59.614345 kernel: audit: type=1105 audit(1707503759.607:333): pid=4735 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:59.614438 kernel: audit: 
type=1103 audit(1707503759.608:334): pid=4738 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:59.711705 sshd[4735]: pam_unix(sshd:session): session closed for user core Feb 9 18:35:59.712000 audit[4735]: USER_END pid=4735 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:59.714304 systemd[1]: Started sshd@11-10.0.0.89:22-10.0.0.1:45456.service. Feb 9 18:35:59.712000 audit[4735]: CRED_DISP pid=4735 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:59.715544 systemd[1]: sshd@10-10.0.0.89:22-10.0.0.1:45452.service: Deactivated successfully. Feb 9 18:35:59.716673 systemd-logind[1201]: Session 11 logged out. Waiting for processes to exit. Feb 9 18:35:59.716750 systemd[1]: session-11.scope: Deactivated successfully. 
Feb 9 18:35:59.717658 kernel: audit: type=1106 audit(1707503759.712:335): pid=4735 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:59.717716 kernel: audit: type=1104 audit(1707503759.712:336): pid=4735 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:59.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.89:22-10.0.0.1:45456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:59.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.89:22-10.0.0.1:45452 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:59.717493 systemd-logind[1201]: Removed session 11. 
Feb 9 18:35:59.756000 audit[4748]: USER_ACCT pid=4748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:59.757395 sshd[4748]: Accepted publickey for core from 10.0.0.1 port 45456 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:35:59.758000 audit[4748]: CRED_ACQ pid=4748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:59.758000 audit[4748]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeddcb1e0 a2=3 a3=1 items=0 ppid=1 pid=4748 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:59.758000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 18:35:59.758748 sshd[4748]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:35:59.761862 systemd-logind[1201]: New session 12 of user core. Feb 9 18:35:59.762864 systemd[1]: Started session-12.scope. 
Feb 9 18:35:59.765000 audit[4748]: USER_START pid=4748 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:59.766000 audit[4753]: CRED_ACQ pid=4753 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:00.001620 sshd[4748]: pam_unix(sshd:session): session closed for user core Feb 9 18:36:00.008904 systemd[1]: Started sshd@12-10.0.0.89:22-10.0.0.1:45464.service. Feb 9 18:36:00.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.89:22-10.0.0.1:45464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:36:00.011000 audit[4748]: USER_END pid=4748 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:00.011000 audit[4748]: CRED_DISP pid=4748 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:00.020300 systemd[1]: sshd@11-10.0.0.89:22-10.0.0.1:45456.service: Deactivated successfully. Feb 9 18:36:00.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.89:22-10.0.0.1:45456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:36:00.023580 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 18:36:00.024232 systemd-logind[1201]: Session 12 logged out. Waiting for processes to exit. Feb 9 18:36:00.025163 systemd-logind[1201]: Removed session 12. Feb 9 18:36:00.057000 audit[4760]: USER_ACCT pid=4760 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:00.057901 sshd[4760]: Accepted publickey for core from 10.0.0.1 port 45464 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:36:00.058000 audit[4760]: CRED_ACQ pid=4760 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:00.058000 audit[4760]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcf430d90 a2=3 a3=1 items=0 ppid=1 pid=4760 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:00.058000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 18:36:00.058987 sshd[4760]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:36:00.062865 systemd[1]: Started session-13.scope. Feb 9 18:36:00.063052 systemd-logind[1201]: New session 13 of user core. 
Feb 9 18:36:00.066000 audit[4760]: USER_START pid=4760 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:00.067000 audit[4765]: CRED_ACQ pid=4765 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:00.173035 sshd[4760]: pam_unix(sshd:session): session closed for user core Feb 9 18:36:00.173000 audit[4760]: USER_END pid=4760 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:00.173000 audit[4760]: CRED_DISP pid=4760 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:00.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.89:22-10.0.0.1:45464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:36:00.175374 systemd[1]: sshd@12-10.0.0.89:22-10.0.0.1:45464.service: Deactivated successfully. Feb 9 18:36:00.176598 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 18:36:00.176938 systemd-logind[1201]: Session 13 logged out. Waiting for processes to exit. Feb 9 18:36:00.177704 systemd-logind[1201]: Removed session 13. 
Feb 9 18:36:03.017015 env[1223]: time="2024-02-09T18:36:03.016975092Z" level=info msg="StopPodSandbox for \"769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c\"" Feb 9 18:36:03.082047 env[1223]: 2024-02-09 18:36:03.050 [WARNING][4803] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tbbzb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"85c81bc7-ead8-41f3-b0f6-13db63c2997b", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 35, 22, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6b88367795627ec24daa5a2a11c847031f7a611dd253fe4f95a5c0fe0cb18ffe", Pod:"csi-node-driver-tbbzb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8d00f4f0cee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:36:03.082047 env[1223]: 2024-02-09 18:36:03.050 [INFO][4803] k8s.go 
578: Cleaning up netns ContainerID="769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" Feb 9 18:36:03.082047 env[1223]: 2024-02-09 18:36:03.050 [INFO][4803] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" iface="eth0" netns="" Feb 9 18:36:03.082047 env[1223]: 2024-02-09 18:36:03.051 [INFO][4803] k8s.go 585: Releasing IP address(es) ContainerID="769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" Feb 9 18:36:03.082047 env[1223]: 2024-02-09 18:36:03.051 [INFO][4803] utils.go 188: Calico CNI releasing IP address ContainerID="769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" Feb 9 18:36:03.082047 env[1223]: 2024-02-09 18:36:03.067 [INFO][4811] ipam_plugin.go 415: Releasing address using handleID ContainerID="769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" HandleID="k8s-pod-network.769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" Workload="localhost-k8s-csi--node--driver--tbbzb-eth0" Feb 9 18:36:03.082047 env[1223]: 2024-02-09 18:36:03.067 [INFO][4811] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:36:03.082047 env[1223]: 2024-02-09 18:36:03.067 [INFO][4811] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:36:03.082047 env[1223]: 2024-02-09 18:36:03.076 [WARNING][4811] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" HandleID="k8s-pod-network.769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" Workload="localhost-k8s-csi--node--driver--tbbzb-eth0" Feb 9 18:36:03.082047 env[1223]: 2024-02-09 18:36:03.076 [INFO][4811] ipam_plugin.go 443: Releasing address using workloadID ContainerID="769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" HandleID="k8s-pod-network.769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" Workload="localhost-k8s-csi--node--driver--tbbzb-eth0" Feb 9 18:36:03.082047 env[1223]: 2024-02-09 18:36:03.078 [INFO][4811] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 18:36:03.082047 env[1223]: 2024-02-09 18:36:03.080 [INFO][4803] k8s.go 591: Teardown processing complete. ContainerID="769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" Feb 9 18:36:03.082496 env[1223]: time="2024-02-09T18:36:03.082078240Z" level=info msg="TearDown network for sandbox \"769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c\" successfully" Feb 9 18:36:03.082496 env[1223]: time="2024-02-09T18:36:03.082109400Z" level=info msg="StopPodSandbox for \"769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c\" returns successfully" Feb 9 18:36:03.083083 env[1223]: time="2024-02-09T18:36:03.083054967Z" level=info msg="RemovePodSandbox for \"769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c\"" Feb 9 18:36:03.083146 env[1223]: time="2024-02-09T18:36:03.083092807Z" level=info msg="Forcibly stopping sandbox \"769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c\"" Feb 9 18:36:03.151715 env[1223]: 2024-02-09 18:36:03.121 [WARNING][4836] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tbbzb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"85c81bc7-ead8-41f3-b0f6-13db63c2997b", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 35, 22, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6b88367795627ec24daa5a2a11c847031f7a611dd253fe4f95a5c0fe0cb18ffe", Pod:"csi-node-driver-tbbzb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8d00f4f0cee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:36:03.151715 env[1223]: 2024-02-09 18:36:03.121 [INFO][4836] k8s.go 578: Cleaning up netns ContainerID="769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" Feb 9 18:36:03.151715 env[1223]: 2024-02-09 18:36:03.121 [INFO][4836] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" iface="eth0" netns="" Feb 9 18:36:03.151715 env[1223]: 2024-02-09 18:36:03.121 [INFO][4836] k8s.go 585: Releasing IP address(es) ContainerID="769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" Feb 9 18:36:03.151715 env[1223]: 2024-02-09 18:36:03.121 [INFO][4836] utils.go 188: Calico CNI releasing IP address ContainerID="769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" Feb 9 18:36:03.151715 env[1223]: 2024-02-09 18:36:03.138 [INFO][4844] ipam_plugin.go 415: Releasing address using handleID ContainerID="769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" HandleID="k8s-pod-network.769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" Workload="localhost-k8s-csi--node--driver--tbbzb-eth0" Feb 9 18:36:03.151715 env[1223]: 2024-02-09 18:36:03.138 [INFO][4844] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:36:03.151715 env[1223]: 2024-02-09 18:36:03.138 [INFO][4844] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:36:03.151715 env[1223]: 2024-02-09 18:36:03.147 [WARNING][4844] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" HandleID="k8s-pod-network.769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" Workload="localhost-k8s-csi--node--driver--tbbzb-eth0" Feb 9 18:36:03.151715 env[1223]: 2024-02-09 18:36:03.147 [INFO][4844] ipam_plugin.go 443: Releasing address using workloadID ContainerID="769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" HandleID="k8s-pod-network.769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" Workload="localhost-k8s-csi--node--driver--tbbzb-eth0" Feb 9 18:36:03.151715 env[1223]: 2024-02-09 18:36:03.148 [INFO][4844] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 18:36:03.151715 env[1223]: 2024-02-09 18:36:03.150 [INFO][4836] k8s.go 591: Teardown processing complete. ContainerID="769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c" Feb 9 18:36:03.152204 env[1223]: time="2024-02-09T18:36:03.151746860Z" level=info msg="TearDown network for sandbox \"769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c\" successfully" Feb 9 18:36:03.158859 env[1223]: time="2024-02-09T18:36:03.158806071Z" level=info msg="RemovePodSandbox \"769713d0716c35d710d94e0fe4badcc0118eba1f1070e698f285da44d833015c\" returns successfully" Feb 9 18:36:03.159370 env[1223]: time="2024-02-09T18:36:03.159325955Z" level=info msg="StopPodSandbox for \"866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328\"" Feb 9 18:36:03.230886 env[1223]: 2024-02-09 18:36:03.189 [WARNING][4867] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--qbgfk-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"09bc0638-8c0c-4129-8ff7-de2aae58b31e", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 35, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"33a1c014fd09c2318ae73dd4b67d7c7da9fbac689a7ece3d3a0144f0ab5fb229", Pod:"coredns-787d4945fb-qbgfk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali303a35b8668", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:36:03.230886 env[1223]: 2024-02-09 18:36:03.189 [INFO][4867] k8s.go 578: Cleaning up netns ContainerID="866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" Feb 9 18:36:03.230886 env[1223]: 2024-02-09 18:36:03.189 [INFO][4867] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" iface="eth0" netns="" Feb 9 18:36:03.230886 env[1223]: 2024-02-09 18:36:03.189 [INFO][4867] k8s.go 585: Releasing IP address(es) ContainerID="866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" Feb 9 18:36:03.230886 env[1223]: 2024-02-09 18:36:03.189 [INFO][4867] utils.go 188: Calico CNI releasing IP address ContainerID="866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" Feb 9 18:36:03.230886 env[1223]: 2024-02-09 18:36:03.214 [INFO][4875] ipam_plugin.go 415: Releasing address using handleID ContainerID="866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" HandleID="k8s-pod-network.866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" Workload="localhost-k8s-coredns--787d4945fb--qbgfk-eth0" Feb 9 18:36:03.230886 env[1223]: 2024-02-09 18:36:03.215 [INFO][4875] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:36:03.230886 env[1223]: 2024-02-09 18:36:03.215 [INFO][4875] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:36:03.230886 env[1223]: 2024-02-09 18:36:03.226 [WARNING][4875] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" HandleID="k8s-pod-network.866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" Workload="localhost-k8s-coredns--787d4945fb--qbgfk-eth0" Feb 9 18:36:03.230886 env[1223]: 2024-02-09 18:36:03.226 [INFO][4875] ipam_plugin.go 443: Releasing address using workloadID ContainerID="866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" HandleID="k8s-pod-network.866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" Workload="localhost-k8s-coredns--787d4945fb--qbgfk-eth0" Feb 9 18:36:03.230886 env[1223]: 2024-02-09 18:36:03.227 [INFO][4875] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 18:36:03.230886 env[1223]: 2024-02-09 18:36:03.229 [INFO][4867] k8s.go 591: Teardown processing complete. ContainerID="866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" Feb 9 18:36:03.230886 env[1223]: time="2024-02-09T18:36:03.230827588Z" level=info msg="TearDown network for sandbox \"866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328\" successfully" Feb 9 18:36:03.230886 env[1223]: time="2024-02-09T18:36:03.230859109Z" level=info msg="StopPodSandbox for \"866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328\" returns successfully" Feb 9 18:36:03.231548 env[1223]: time="2024-02-09T18:36:03.231518833Z" level=info msg="RemovePodSandbox for \"866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328\"" Feb 9 18:36:03.231670 env[1223]: time="2024-02-09T18:36:03.231630194Z" level=info msg="Forcibly stopping sandbox \"866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328\"" Feb 9 18:36:03.301544 env[1223]: 2024-02-09 18:36:03.268 [WARNING][4898] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--qbgfk-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"09bc0638-8c0c-4129-8ff7-de2aae58b31e", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 35, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"33a1c014fd09c2318ae73dd4b67d7c7da9fbac689a7ece3d3a0144f0ab5fb229", Pod:"coredns-787d4945fb-qbgfk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali303a35b8668", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:36:03.301544 env[1223]: 2024-02-09 18:36:03.269 [INFO][4898] k8s.go 578: Cleaning up netns 
ContainerID="866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" Feb 9 18:36:03.301544 env[1223]: 2024-02-09 18:36:03.269 [INFO][4898] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" iface="eth0" netns="" Feb 9 18:36:03.301544 env[1223]: 2024-02-09 18:36:03.269 [INFO][4898] k8s.go 585: Releasing IP address(es) ContainerID="866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" Feb 9 18:36:03.301544 env[1223]: 2024-02-09 18:36:03.269 [INFO][4898] utils.go 188: Calico CNI releasing IP address ContainerID="866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" Feb 9 18:36:03.301544 env[1223]: 2024-02-09 18:36:03.286 [INFO][4905] ipam_plugin.go 415: Releasing address using handleID ContainerID="866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" HandleID="k8s-pod-network.866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" Workload="localhost-k8s-coredns--787d4945fb--qbgfk-eth0" Feb 9 18:36:03.301544 env[1223]: 2024-02-09 18:36:03.286 [INFO][4905] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:36:03.301544 env[1223]: 2024-02-09 18:36:03.286 [INFO][4905] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:36:03.301544 env[1223]: 2024-02-09 18:36:03.296 [WARNING][4905] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" HandleID="k8s-pod-network.866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" Workload="localhost-k8s-coredns--787d4945fb--qbgfk-eth0" Feb 9 18:36:03.301544 env[1223]: 2024-02-09 18:36:03.296 [INFO][4905] ipam_plugin.go 443: Releasing address using workloadID ContainerID="866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" HandleID="k8s-pod-network.866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" Workload="localhost-k8s-coredns--787d4945fb--qbgfk-eth0" Feb 9 18:36:03.301544 env[1223]: 2024-02-09 18:36:03.298 [INFO][4905] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 18:36:03.301544 env[1223]: 2024-02-09 18:36:03.299 [INFO][4898] k8s.go 591: Teardown processing complete. ContainerID="866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328" Feb 9 18:36:03.302202 env[1223]: time="2024-02-09T18:36:03.302149181Z" level=info msg="TearDown network for sandbox \"866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328\" successfully" Feb 9 18:36:03.305036 env[1223]: time="2024-02-09T18:36:03.305006321Z" level=info msg="RemovePodSandbox \"866edcda1bcfaba73656cd21a8239a64de921e0120ea1371734cac23b2d01328\" returns successfully" Feb 9 18:36:03.305546 env[1223]: time="2024-02-09T18:36:03.305519725Z" level=info msg="StopPodSandbox for \"976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd\"" Feb 9 18:36:03.370473 env[1223]: 2024-02-09 18:36:03.340 [WARNING][4929] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--6v8j8-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"503ecc7d-dfef-4edc-be40-46d8e27281f8", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 35, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"016c0224d42942aa7598c729190f98fad640381e0f018f97aef1ab4d3c298a46", Pod:"coredns-787d4945fb-6v8j8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali300d05296d6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:36:03.370473 env[1223]: 2024-02-09 18:36:03.340 [INFO][4929] k8s.go 578: Cleaning up netns 
ContainerID="976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" Feb 9 18:36:03.370473 env[1223]: 2024-02-09 18:36:03.340 [INFO][4929] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" iface="eth0" netns="" Feb 9 18:36:03.370473 env[1223]: 2024-02-09 18:36:03.340 [INFO][4929] k8s.go 585: Releasing IP address(es) ContainerID="976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" Feb 9 18:36:03.370473 env[1223]: 2024-02-09 18:36:03.340 [INFO][4929] utils.go 188: Calico CNI releasing IP address ContainerID="976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" Feb 9 18:36:03.370473 env[1223]: 2024-02-09 18:36:03.356 [INFO][4937] ipam_plugin.go 415: Releasing address using handleID ContainerID="976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" HandleID="k8s-pod-network.976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" Workload="localhost-k8s-coredns--787d4945fb--6v8j8-eth0" Feb 9 18:36:03.370473 env[1223]: 2024-02-09 18:36:03.357 [INFO][4937] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:36:03.370473 env[1223]: 2024-02-09 18:36:03.357 [INFO][4937] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:36:03.370473 env[1223]: 2024-02-09 18:36:03.366 [WARNING][4937] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" HandleID="k8s-pod-network.976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" Workload="localhost-k8s-coredns--787d4945fb--6v8j8-eth0" Feb 9 18:36:03.370473 env[1223]: 2024-02-09 18:36:03.366 [INFO][4937] ipam_plugin.go 443: Releasing address using workloadID ContainerID="976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" HandleID="k8s-pod-network.976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" Workload="localhost-k8s-coredns--787d4945fb--6v8j8-eth0" Feb 9 18:36:03.370473 env[1223]: 2024-02-09 18:36:03.367 [INFO][4937] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 18:36:03.370473 env[1223]: 2024-02-09 18:36:03.368 [INFO][4929] k8s.go 591: Teardown processing complete. ContainerID="976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" Feb 9 18:36:03.370895 env[1223]: time="2024-02-09T18:36:03.370507552Z" level=info msg="TearDown network for sandbox \"976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd\" successfully" Feb 9 18:36:03.370895 env[1223]: time="2024-02-09T18:36:03.370546472Z" level=info msg="StopPodSandbox for \"976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd\" returns successfully" Feb 9 18:36:03.370971 env[1223]: time="2024-02-09T18:36:03.370944395Z" level=info msg="RemovePodSandbox for \"976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd\"" Feb 9 18:36:03.371015 env[1223]: time="2024-02-09T18:36:03.370978155Z" level=info msg="Forcibly stopping sandbox \"976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd\"" Feb 9 18:36:03.436200 env[1223]: 2024-02-09 18:36:03.406 [WARNING][4959] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--6v8j8-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"503ecc7d-dfef-4edc-be40-46d8e27281f8", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 35, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"016c0224d42942aa7598c729190f98fad640381e0f018f97aef1ab4d3c298a46", Pod:"coredns-787d4945fb-6v8j8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali300d05296d6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:36:03.436200 env[1223]: 2024-02-09 18:36:03.406 [INFO][4959] k8s.go 578: Cleaning up netns 
ContainerID="976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" Feb 9 18:36:03.436200 env[1223]: 2024-02-09 18:36:03.406 [INFO][4959] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" iface="eth0" netns="" Feb 9 18:36:03.436200 env[1223]: 2024-02-09 18:36:03.406 [INFO][4959] k8s.go 585: Releasing IP address(es) ContainerID="976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" Feb 9 18:36:03.436200 env[1223]: 2024-02-09 18:36:03.406 [INFO][4959] utils.go 188: Calico CNI releasing IP address ContainerID="976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" Feb 9 18:36:03.436200 env[1223]: 2024-02-09 18:36:03.422 [INFO][4966] ipam_plugin.go 415: Releasing address using handleID ContainerID="976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" HandleID="k8s-pod-network.976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" Workload="localhost-k8s-coredns--787d4945fb--6v8j8-eth0" Feb 9 18:36:03.436200 env[1223]: 2024-02-09 18:36:03.422 [INFO][4966] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:36:03.436200 env[1223]: 2024-02-09 18:36:03.422 [INFO][4966] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:36:03.436200 env[1223]: 2024-02-09 18:36:03.431 [WARNING][4966] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" HandleID="k8s-pod-network.976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" Workload="localhost-k8s-coredns--787d4945fb--6v8j8-eth0" Feb 9 18:36:03.436200 env[1223]: 2024-02-09 18:36:03.431 [INFO][4966] ipam_plugin.go 443: Releasing address using workloadID ContainerID="976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" HandleID="k8s-pod-network.976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" Workload="localhost-k8s-coredns--787d4945fb--6v8j8-eth0" Feb 9 18:36:03.436200 env[1223]: 2024-02-09 18:36:03.433 [INFO][4966] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 18:36:03.436200 env[1223]: 2024-02-09 18:36:03.434 [INFO][4959] k8s.go 591: Teardown processing complete. ContainerID="976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd" Feb 9 18:36:03.436651 env[1223]: time="2024-02-09T18:36:03.436222784Z" level=info msg="TearDown network for sandbox \"976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd\" successfully" Feb 9 18:36:03.438976 env[1223]: time="2024-02-09T18:36:03.438945964Z" level=info msg="RemovePodSandbox \"976407fc6d8e9b86c0399c28cf1e16b74ef1eb967381036d307f72ef7761f1fd\" returns successfully" Feb 9 18:36:03.439429 env[1223]: time="2024-02-09T18:36:03.439405687Z" level=info msg="StopPodSandbox for \"45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a\"" Feb 9 18:36:03.539855 env[1223]: 2024-02-09 18:36:03.495 [WARNING][4988] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d59495b99--568dd-eth0", GenerateName:"calico-kube-controllers-6d59495b99-", Namespace:"calico-system", SelfLink:"", UID:"fe06e181-2fd6-42d6-8ad3-fb53734220fc", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 35, 22, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d59495b99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6b6d7eca5eff49e85ec54039fa11398f54d6ff375a871d1d16939a3cfdff4668", Pod:"calico-kube-controllers-6d59495b99-568dd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7da199293f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:36:03.539855 env[1223]: 2024-02-09 18:36:03.498 [INFO][4988] k8s.go 578: Cleaning up netns ContainerID="45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" Feb 9 18:36:03.539855 env[1223]: 2024-02-09 18:36:03.503 [INFO][4988] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" iface="eth0" netns="" Feb 9 18:36:03.539855 env[1223]: 2024-02-09 18:36:03.503 [INFO][4988] k8s.go 585: Releasing IP address(es) ContainerID="45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" Feb 9 18:36:03.539855 env[1223]: 2024-02-09 18:36:03.503 [INFO][4988] utils.go 188: Calico CNI releasing IP address ContainerID="45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" Feb 9 18:36:03.539855 env[1223]: 2024-02-09 18:36:03.521 [INFO][4996] ipam_plugin.go 415: Releasing address using handleID ContainerID="45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" HandleID="k8s-pod-network.45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" Workload="localhost-k8s-calico--kube--controllers--6d59495b99--568dd-eth0" Feb 9 18:36:03.539855 env[1223]: 2024-02-09 18:36:03.521 [INFO][4996] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:36:03.539855 env[1223]: 2024-02-09 18:36:03.521 [INFO][4996] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:36:03.539855 env[1223]: 2024-02-09 18:36:03.530 [WARNING][4996] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" HandleID="k8s-pod-network.45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" Workload="localhost-k8s-calico--kube--controllers--6d59495b99--568dd-eth0" Feb 9 18:36:03.539855 env[1223]: 2024-02-09 18:36:03.530 [INFO][4996] ipam_plugin.go 443: Releasing address using workloadID ContainerID="45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" HandleID="k8s-pod-network.45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" Workload="localhost-k8s-calico--kube--controllers--6d59495b99--568dd-eth0" Feb 9 18:36:03.539855 env[1223]: 2024-02-09 18:36:03.532 [INFO][4996] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 18:36:03.539855 env[1223]: 2024-02-09 18:36:03.537 [INFO][4988] k8s.go 591: Teardown processing complete. ContainerID="45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" Feb 9 18:36:03.539855 env[1223]: time="2024-02-09T18:36:03.539663807Z" level=info msg="TearDown network for sandbox \"45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a\" successfully" Feb 9 18:36:03.539855 env[1223]: time="2024-02-09T18:36:03.539692008Z" level=info msg="StopPodSandbox for \"45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a\" returns successfully" Feb 9 18:36:03.540979 env[1223]: time="2024-02-09T18:36:03.540656975Z" level=info msg="RemovePodSandbox for \"45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a\"" Feb 9 18:36:03.540979 env[1223]: time="2024-02-09T18:36:03.540726935Z" level=info msg="Forcibly stopping sandbox \"45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a\"" Feb 9 18:36:03.604119 env[1223]: 2024-02-09 18:36:03.573 [WARNING][5021] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d59495b99--568dd-eth0", GenerateName:"calico-kube-controllers-6d59495b99-", Namespace:"calico-system", SelfLink:"", UID:"fe06e181-2fd6-42d6-8ad3-fb53734220fc", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 35, 22, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d59495b99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6b6d7eca5eff49e85ec54039fa11398f54d6ff375a871d1d16939a3cfdff4668", Pod:"calico-kube-controllers-6d59495b99-568dd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7da199293f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:36:03.604119 env[1223]: 2024-02-09 18:36:03.573 [INFO][5021] k8s.go 578: Cleaning up netns ContainerID="45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" Feb 9 18:36:03.604119 env[1223]: 2024-02-09 18:36:03.573 [INFO][5021] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" iface="eth0" netns="" Feb 9 18:36:03.604119 env[1223]: 2024-02-09 18:36:03.573 [INFO][5021] k8s.go 585: Releasing IP address(es) ContainerID="45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" Feb 9 18:36:03.604119 env[1223]: 2024-02-09 18:36:03.573 [INFO][5021] utils.go 188: Calico CNI releasing IP address ContainerID="45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" Feb 9 18:36:03.604119 env[1223]: 2024-02-09 18:36:03.590 [INFO][5029] ipam_plugin.go 415: Releasing address using handleID ContainerID="45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" HandleID="k8s-pod-network.45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" Workload="localhost-k8s-calico--kube--controllers--6d59495b99--568dd-eth0" Feb 9 18:36:03.604119 env[1223]: 2024-02-09 18:36:03.590 [INFO][5029] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:36:03.604119 env[1223]: 2024-02-09 18:36:03.590 [INFO][5029] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:36:03.604119 env[1223]: 2024-02-09 18:36:03.599 [WARNING][5029] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" HandleID="k8s-pod-network.45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" Workload="localhost-k8s-calico--kube--controllers--6d59495b99--568dd-eth0" Feb 9 18:36:03.604119 env[1223]: 2024-02-09 18:36:03.599 [INFO][5029] ipam_plugin.go 443: Releasing address using workloadID ContainerID="45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" HandleID="k8s-pod-network.45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" Workload="localhost-k8s-calico--kube--controllers--6d59495b99--568dd-eth0" Feb 9 18:36:03.604119 env[1223]: 2024-02-09 18:36:03.601 [INFO][5029] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 18:36:03.604119 env[1223]: 2024-02-09 18:36:03.602 [INFO][5021] k8s.go 591: Teardown processing complete. ContainerID="45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a" Feb 9 18:36:03.604119 env[1223]: time="2024-02-09T18:36:03.604091350Z" level=info msg="TearDown network for sandbox \"45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a\" successfully" Feb 9 18:36:03.607511 env[1223]: time="2024-02-09T18:36:03.607474895Z" level=info msg="RemovePodSandbox \"45e1359fa87baac1550a3d9cbc1c57e31ecf5a628fb0047026e0dcd089bb3b3a\" returns successfully" Feb 9 18:36:05.179368 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 9 18:36:05.179478 kernel: audit: type=1130 audit(1707503765.176:356): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.89:22-10.0.0.1:44666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:36:05.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.89:22-10.0.0.1:44666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:36:05.176609 systemd[1]: Started sshd@13-10.0.0.89:22-10.0.0.1:44666.service. 
Feb 9 18:36:05.216000 audit[5036]: USER_ACCT pid=5036 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:05.217504 sshd[5036]: Accepted publickey for core from 10.0.0.1 port 44666 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:36:05.218505 sshd[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:36:05.217000 audit[5036]: CRED_ACQ pid=5036 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:05.222504 kernel: audit: type=1101 audit(1707503765.216:357): pid=5036 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:05.222554 kernel: audit: type=1103 audit(1707503765.217:358): pid=5036 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:05.223769 kernel: audit: type=1006 audit(1707503765.217:359): pid=5036 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1
Feb 9 18:36:05.223797 kernel: audit: type=1300 audit(1707503765.217:359): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc8913630 a2=3 a3=1 items=0 ppid=1 pid=5036 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:36:05.217000 audit[5036]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc8913630 a2=3 a3=1 items=0 ppid=1 pid=5036 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:36:05.222716 systemd-logind[1201]: New session 14 of user core.
Feb 9 18:36:05.223570 systemd[1]: Started session-14.scope.
Feb 9 18:36:05.217000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 18:36:05.226376 kernel: audit: type=1327 audit(1707503765.217:359): proctitle=737368643A20636F7265205B707269765D
Feb 9 18:36:05.226000 audit[5036]: USER_START pid=5036 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:05.228000 audit[5039]: CRED_ACQ pid=5039 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:05.232167 kernel: audit: type=1105 audit(1707503765.226:360): pid=5036 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:05.232211 kernel: audit: type=1103 audit(1707503765.228:361): pid=5039 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:05.331507 sshd[5036]: pam_unix(sshd:session): session closed for user core
Feb 9 18:36:05.331000 audit[5036]: USER_END pid=5036 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:05.334204 systemd[1]: sshd@13-10.0.0.89:22-10.0.0.1:44666.service: Deactivated successfully.
Feb 9 18:36:05.332000 audit[5036]: CRED_DISP pid=5036 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:05.335294 systemd-logind[1201]: Session 14 logged out. Waiting for processes to exit.
Feb 9 18:36:05.335345 systemd[1]: session-14.scope: Deactivated successfully.
Feb 9 18:36:05.336053 systemd-logind[1201]: Removed session 14.
Feb 9 18:36:05.336990 kernel: audit: type=1106 audit(1707503765.331:362): pid=5036 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:05.337054 kernel: audit: type=1104 audit(1707503765.332:363): pid=5036 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:05.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.89:22-10.0.0.1:44666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:36:10.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.89:22-10.0.0.1:44682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:36:10.334831 systemd[1]: Started sshd@14-10.0.0.89:22-10.0.0.1:44682.service.
Feb 9 18:36:10.336244 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 18:36:10.336455 kernel: audit: type=1130 audit(1707503770.334:365): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.89:22-10.0.0.1:44682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:36:10.377000 audit[5051]: USER_ACCT pid=5051 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:10.377853 sshd[5051]: Accepted publickey for core from 10.0.0.1 port 44682 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:36:10.379504 sshd[5051]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:36:10.378000 audit[5051]: CRED_ACQ pid=5051 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:10.382396 kernel: audit: type=1101 audit(1707503770.377:366): pid=5051 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:10.382465 kernel: audit: type=1103 audit(1707503770.378:367): pid=5051 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:10.382483 kernel: audit: type=1006 audit(1707503770.378:368): pid=5051 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1
Feb 9 18:36:10.383785 kernel: audit: type=1300 audit(1707503770.378:368): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff24dbc60 a2=3 a3=1 items=0 ppid=1 pid=5051 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:36:10.378000 audit[5051]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff24dbc60 a2=3 a3=1 items=0 ppid=1 pid=5051 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:36:10.383412 systemd-logind[1201]: New session 15 of user core.
Feb 9 18:36:10.383808 systemd[1]: Started session-15.scope.
Feb 9 18:36:10.385962 kernel: audit: type=1327 audit(1707503770.378:368): proctitle=737368643A20636F7265205B707269765D
Feb 9 18:36:10.378000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 18:36:10.391000 audit[5051]: USER_START pid=5051 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:10.392000 audit[5054]: CRED_ACQ pid=5054 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:10.396471 kernel: audit: type=1105 audit(1707503770.391:369): pid=5051 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:10.396515 kernel: audit: type=1103 audit(1707503770.392:370): pid=5054 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:10.505469 sshd[5051]: pam_unix(sshd:session): session closed for user core
Feb 9 18:36:10.506000 audit[5051]: USER_END pid=5051 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:10.508177 systemd[1]: sshd@14-10.0.0.89:22-10.0.0.1:44682.service: Deactivated successfully.
Feb 9 18:36:10.506000 audit[5051]: CRED_DISP pid=5051 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:10.509311 systemd[1]: session-15.scope: Deactivated successfully.
Feb 9 18:36:10.509312 systemd-logind[1201]: Session 15 logged out. Waiting for processes to exit.
Feb 9 18:36:10.510314 systemd-logind[1201]: Removed session 15.
Feb 9 18:36:10.511387 kernel: audit: type=1106 audit(1707503770.506:371): pid=5051 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:10.511429 kernel: audit: type=1104 audit(1707503770.506:372): pid=5051 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:10.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.89:22-10.0.0.1:44682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:36:14.304043 systemd[1]: run-containerd-runc-k8s.io-d753e69c050c3b2ea477858df693c512adf63d5e84e7be92da1b270b847277dd-runc.FmEaaV.mount: Deactivated successfully.
Feb 9 18:36:15.508477 systemd[1]: Started sshd@15-10.0.0.89:22-10.0.0.1:35396.service.
Feb 9 18:36:15.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.89:22-10.0.0.1:35396 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:36:15.511318 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 18:36:15.511377 kernel: audit: type=1130 audit(1707503775.508:374): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.89:22-10.0.0.1:35396 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:36:15.549000 audit[5094]: USER_ACCT pid=5094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:15.550409 sshd[5094]: Accepted publickey for core from 10.0.0.1 port 35396 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:36:15.551865 sshd[5094]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:36:15.551000 audit[5094]: CRED_ACQ pid=5094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:15.554847 kernel: audit: type=1101 audit(1707503775.549:375): pid=5094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:15.554904 kernel: audit: type=1103 audit(1707503775.551:376): pid=5094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:15.554923 kernel: audit: type=1006 audit(1707503775.551:377): pid=5094 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1
Feb 9 18:36:15.555672 systemd-logind[1201]: New session 16 of user core.
Feb 9 18:36:15.556124 systemd[1]: Started session-16.scope.
Feb 9 18:36:15.556241 kernel: audit: type=1300 audit(1707503775.551:377): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffee14660 a2=3 a3=1 items=0 ppid=1 pid=5094 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:36:15.551000 audit[5094]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffee14660 a2=3 a3=1 items=0 ppid=1 pid=5094 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:36:15.551000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 18:36:15.559589 kernel: audit: type=1327 audit(1707503775.551:377): proctitle=737368643A20636F7265205B707269765D
Feb 9 18:36:15.564400 kernel: audit: type=1105 audit(1707503775.559:378): pid=5094 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:15.564460 kernel: audit: type=1103 audit(1707503775.560:379): pid=5097 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:15.559000 audit[5094]: USER_START pid=5094 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:15.560000 audit[5097]: CRED_ACQ pid=5097 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:15.666043 sshd[5094]: pam_unix(sshd:session): session closed for user core
Feb 9 18:36:15.666000 audit[5094]: USER_END pid=5094 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:15.668547 systemd[1]: sshd@15-10.0.0.89:22-10.0.0.1:35396.service: Deactivated successfully.
Feb 9 18:36:15.666000 audit[5094]: CRED_DISP pid=5094 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:15.669656 systemd[1]: session-16.scope: Deactivated successfully.
Feb 9 18:36:15.669659 systemd-logind[1201]: Session 16 logged out. Waiting for processes to exit.
Feb 9 18:36:15.670709 systemd-logind[1201]: Removed session 16.
Feb 9 18:36:15.671755 kernel: audit: type=1106 audit(1707503775.666:380): pid=5094 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:15.671817 kernel: audit: type=1104 audit(1707503775.666:381): pid=5094 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:15.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.89:22-10.0.0.1:35396 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:36:20.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.89:22-10.0.0.1:35408 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:36:20.669958 systemd[1]: Started sshd@16-10.0.0.89:22-10.0.0.1:35408.service.
Feb 9 18:36:20.670705 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 18:36:20.670813 kernel: audit: type=1130 audit(1707503780.669:383): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.89:22-10.0.0.1:35408 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:36:20.710396 sshd[5130]: Accepted publickey for core from 10.0.0.1 port 35408 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:36:20.709000 audit[5130]: USER_ACCT pid=5130 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:20.711613 sshd[5130]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:36:20.710000 audit[5130]: CRED_ACQ pid=5130 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:20.714685 kernel: audit: type=1101 audit(1707503780.709:384): pid=5130 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:20.715929 kernel: audit: type=1103 audit(1707503780.710:385): pid=5130 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:20.715948 kernel: audit: type=1006 audit(1707503780.710:386): pid=5130 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1
Feb 9 18:36:20.710000 audit[5130]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd2ca5fb0 a2=3 a3=1 items=0 ppid=1 pid=5130 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:36:20.715287 systemd-logind[1201]: New session 17 of user core.
Feb 9 18:36:20.715727 systemd[1]: Started session-17.scope.
Feb 9 18:36:20.718258 kernel: audit: type=1300 audit(1707503780.710:386): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd2ca5fb0 a2=3 a3=1 items=0 ppid=1 pid=5130 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:36:20.710000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 18:36:20.718363 kernel: audit: type=1327 audit(1707503780.710:386): proctitle=737368643A20636F7265205B707269765D
Feb 9 18:36:20.719000 audit[5130]: USER_START pid=5130 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:20.720000 audit[5133]: CRED_ACQ pid=5133 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:20.724682 kernel: audit: type=1105 audit(1707503780.719:387): pid=5130 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:20.724732 kernel: audit: type=1103 audit(1707503780.720:388): pid=5133 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:20.828817 sshd[5130]: pam_unix(sshd:session): session closed for user core
Feb 9 18:36:20.829000 audit[5130]: USER_END pid=5130 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:20.831269 systemd[1]: sshd@16-10.0.0.89:22-10.0.0.1:35408.service: Deactivated successfully.
Feb 9 18:36:20.829000 audit[5130]: CRED_DISP pid=5130 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:20.832343 systemd-logind[1201]: Session 17 logged out. Waiting for processes to exit.
Feb 9 18:36:20.832430 systemd[1]: session-17.scope: Deactivated successfully.
Feb 9 18:36:20.833578 systemd-logind[1201]: Removed session 17.
Feb 9 18:36:20.834374 kernel: audit: type=1106 audit(1707503780.829:389): pid=5130 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:20.834423 kernel: audit: type=1104 audit(1707503780.829:390): pid=5130 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:20.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.89:22-10.0.0.1:35408 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:36:22.087831 kubelet[2169]: E0209 18:36:22.087802 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:36:22.088235 kubelet[2169]: E0209 18:36:22.088213 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:36:25.088498 kubelet[2169]: E0209 18:36:25.088471 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:36:25.831513 systemd[1]: Started sshd@17-10.0.0.89:22-10.0.0.1:58952.service.
Feb 9 18:36:25.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.89:22-10.0.0.1:58952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:36:25.832403 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 18:36:25.832492 kernel: audit: type=1130 audit(1707503785.831:392): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.89:22-10.0.0.1:58952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:36:25.872000 audit[5145]: USER_ACCT pid=5145 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:25.873384 sshd[5145]: Accepted publickey for core from 10.0.0.1 port 58952 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:36:25.875502 sshd[5145]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:36:25.874000 audit[5145]: CRED_ACQ pid=5145 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:25.877803 kernel: audit: type=1101 audit(1707503785.872:393): pid=5145 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:25.877878 kernel: audit: type=1103 audit(1707503785.874:394): pid=5145 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:25.877900 kernel: audit: type=1006 audit(1707503785.874:395): pid=5145 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1
Feb 9 18:36:25.879242 kernel: audit: type=1300 audit(1707503785.874:395): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff1726bc0 a2=3 a3=1 items=0 ppid=1 pid=5145 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:36:25.874000 audit[5145]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff1726bc0 a2=3 a3=1 items=0 ppid=1 pid=5145 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:36:25.879673 systemd-logind[1201]: New session 18 of user core.
Feb 9 18:36:25.880185 systemd[1]: Started session-18.scope.
Feb 9 18:36:25.874000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 18:36:25.882723 kernel: audit: type=1327 audit(1707503785.874:395): proctitle=737368643A20636F7265205B707269765D
Feb 9 18:36:25.884000 audit[5145]: USER_START pid=5145 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:25.886000 audit[5148]: CRED_ACQ pid=5148 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:25.890172 kernel: audit: type=1105 audit(1707503785.884:396): pid=5145 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:25.890219 kernel: audit: type=1103 audit(1707503785.886:397): pid=5148 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:25.999612 sshd[5145]: pam_unix(sshd:session): session closed for user core
Feb 9 18:36:25.999000 audit[5145]: USER_END pid=5145 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:26.004496 systemd[1]: sshd@17-10.0.0.89:22-10.0.0.1:58952.service: Deactivated successfully.
Feb 9 18:36:26.005509 systemd-logind[1201]: Session 18 logged out. Waiting for processes to exit.
Feb 9 18:36:25.999000 audit[5145]: CRED_DISP pid=5145 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:26.006683 systemd[1]: Started sshd@18-10.0.0.89:22-10.0.0.1:58966.service.
Feb 9 18:36:26.007022 systemd[1]: session-18.scope: Deactivated successfully.
Feb 9 18:36:26.007809 kernel: audit: type=1106 audit(1707503785.999:398): pid=5145 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:26.007876 kernel: audit: type=1104 audit(1707503785.999:399): pid=5145 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:26.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.89:22-10.0.0.1:58952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:36:26.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.89:22-10.0.0.1:58966 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:36:26.009569 systemd-logind[1201]: Removed session 18.
Feb 9 18:36:26.048701 sshd[5159]: Accepted publickey for core from 10.0.0.1 port 58966 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:36:26.048000 audit[5159]: USER_ACCT pid=5159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:26.049000 audit[5159]: CRED_ACQ pid=5159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:26.049000 audit[5159]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe1b42890 a2=3 a3=1 items=0 ppid=1 pid=5159 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:36:26.049000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 18:36:26.049871 sshd[5159]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:36:26.053190 systemd-logind[1201]: New session 19 of user core.
Feb 9 18:36:26.054015 systemd[1]: Started session-19.scope.
Feb 9 18:36:26.056000 audit[5159]: USER_START pid=5159 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:26.058000 audit[5162]: CRED_ACQ pid=5162 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:26.264667 sshd[5159]: pam_unix(sshd:session): session closed for user core
Feb 9 18:36:26.266667 systemd[1]: Started sshd@19-10.0.0.89:22-10.0.0.1:58980.service.
Feb 9 18:36:26.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.89:22-10.0.0.1:58980 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:36:26.268000 audit[5159]: USER_END pid=5159 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:26.269000 audit[5159]: CRED_DISP pid=5159 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:26.272340 systemd[1]: sshd@18-10.0.0.89:22-10.0.0.1:58966.service: Deactivated successfully.
Feb 9 18:36:26.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.89:22-10.0.0.1:58966 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:36:26.274102 systemd-logind[1201]: Session 19 logged out. Waiting for processes to exit.
Feb 9 18:36:26.274181 systemd[1]: session-19.scope: Deactivated successfully.
Feb 9 18:36:26.277527 systemd-logind[1201]: Removed session 19.
Feb 9 18:36:26.318000 audit[5170]: USER_ACCT pid=5170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:26.318706 sshd[5170]: Accepted publickey for core from 10.0.0.1 port 58980 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8
Feb 9 18:36:26.319000 audit[5170]: CRED_ACQ pid=5170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:26.319000 audit[5170]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff511abd0 a2=3 a3=1 items=0 ppid=1 pid=5170 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:36:26.319000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 18:36:26.320190 sshd[5170]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:36:26.324794 systemd[1]: Started session-20.scope.
Feb 9 18:36:26.325154 systemd-logind[1201]: New session 20 of user core.
Feb 9 18:36:26.329000 audit[5170]: USER_START pid=5170 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:26.330000 audit[5175]: CRED_ACQ pid=5175 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:26.464430 systemd[1]: run-containerd-runc-k8s.io-ff74826e677b5e6a7386f16c2ab84443ed7382c9c98cc76278fd2a0ed8f4d539-runc.5BZb4C.mount: Deactivated successfully.
Feb 9 18:36:27.179615 sshd[5170]: pam_unix(sshd:session): session closed for user core
Feb 9 18:36:27.182806 systemd[1]: Started sshd@20-10.0.0.89:22-10.0.0.1:58994.service.
Feb 9 18:36:27.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.89:22-10.0.0.1:58994 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:36:27.183000 audit[5170]: USER_END pid=5170 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:27.183000 audit[5170]: CRED_DISP pid=5170 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 18:36:27.185587 systemd[1]: sshd@19-10.0.0.89:22-10.0.0.1:58980.service: Deactivated successfully.
Feb 9 18:36:27.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.89:22-10.0.0.1:58980 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:36:27.186726 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 18:36:27.187081 systemd-logind[1201]: Session 20 logged out. Waiting for processes to exit. Feb 9 18:36:27.187724 systemd-logind[1201]: Removed session 20. Feb 9 18:36:27.224000 audit[5240]: NETFILTER_CFG table=filter:119 family=2 entries=18 op=nft_register_rule pid=5240 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:27.224000 audit[5240]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10364 a0=3 a1=ffffe6af5f80 a2=0 a3=ffff9a9476c0 items=0 ppid=2354 pid=5240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:27.224000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:27.228000 audit[5219]: USER_ACCT pid=5219 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:27.228460 sshd[5219]: Accepted publickey for core from 10.0.0.1 port 58994 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:36:27.228000 audit[5219]: CRED_ACQ pid=5219 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:27.228000 audit[5219]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 
a1=ffffd85e02f0 a2=3 a3=1 items=0 ppid=1 pid=5219 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:27.228000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 18:36:27.229623 sshd[5219]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:36:27.225000 audit[5240]: NETFILTER_CFG table=nat:120 family=2 entries=78 op=nft_register_rule pid=5240 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:27.225000 audit[5240]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffe6af5f80 a2=0 a3=ffff9a9476c0 items=0 ppid=2354 pid=5240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:27.225000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:27.233812 systemd[1]: Started session-21.scope. Feb 9 18:36:27.233992 systemd-logind[1201]: New session 21 of user core. 
Feb 9 18:36:27.237000 audit[5219]: USER_START pid=5219 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:27.239000 audit[5246]: CRED_ACQ pid=5246 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:27.265000 audit[5268]: NETFILTER_CFG table=filter:121 family=2 entries=30 op=nft_register_rule pid=5268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:27.265000 audit[5268]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10364 a0=3 a1=ffffe0cacb90 a2=0 a3=ffffae1546c0 items=0 ppid=2354 pid=5268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:27.265000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:27.267000 audit[5268]: NETFILTER_CFG table=nat:122 family=2 entries=78 op=nft_register_rule pid=5268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:27.267000 audit[5268]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffe0cacb90 a2=0 a3=ffffae1546c0 items=0 ppid=2354 pid=5268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:27.267000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:27.441673 
sshd[5219]: pam_unix(sshd:session): session closed for user core Feb 9 18:36:27.448145 systemd[1]: Started sshd@21-10.0.0.89:22-10.0.0.1:59000.service. Feb 9 18:36:27.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.89:22-10.0.0.1:59000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:36:27.448000 audit[5219]: USER_END pid=5219 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:27.448000 audit[5219]: CRED_DISP pid=5219 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:27.452413 systemd[1]: sshd@20-10.0.0.89:22-10.0.0.1:58994.service: Deactivated successfully. Feb 9 18:36:27.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.89:22-10.0.0.1:58994 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:36:27.453212 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 18:36:27.454142 systemd-logind[1201]: Session 21 logged out. Waiting for processes to exit. Feb 9 18:36:27.455693 systemd-logind[1201]: Removed session 21. 
Feb 9 18:36:27.498000 audit[5276]: USER_ACCT pid=5276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:27.498817 sshd[5276]: Accepted publickey for core from 10.0.0.1 port 59000 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:36:27.499000 audit[5276]: CRED_ACQ pid=5276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:27.499000 audit[5276]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd1063730 a2=3 a3=1 items=0 ppid=1 pid=5276 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:27.499000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 18:36:27.500535 sshd[5276]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:36:27.505717 systemd[1]: Started session-22.scope. Feb 9 18:36:27.506065 systemd-logind[1201]: New session 22 of user core. 
Feb 9 18:36:27.510000 audit[5276]: USER_START pid=5276 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:27.512000 audit[5281]: CRED_ACQ pid=5281 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:27.628200 sshd[5276]: pam_unix(sshd:session): session closed for user core Feb 9 18:36:27.628000 audit[5276]: USER_END pid=5276 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:27.628000 audit[5276]: CRED_DISP pid=5276 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:27.630936 systemd[1]: sshd@21-10.0.0.89:22-10.0.0.1:59000.service: Deactivated successfully. Feb 9 18:36:27.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.89:22-10.0.0.1:59000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:36:27.632239 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 18:36:27.632937 systemd-logind[1201]: Session 22 logged out. Waiting for processes to exit. Feb 9 18:36:27.633735 systemd-logind[1201]: Removed session 22. 
Feb 9 18:36:31.125167 kubelet[2169]: I0209 18:36:31.125130 2169 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:36:31.154468 kubelet[2169]: I0209 18:36:31.154423 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9xhg\" (UniqueName: \"kubernetes.io/projected/3d0d5fb5-4ded-4def-952a-50770dd688bf-kube-api-access-w9xhg\") pod \"calico-apiserver-7c96ff74b7-cc5qg\" (UID: \"3d0d5fb5-4ded-4def-952a-50770dd688bf\") " pod="calico-apiserver/calico-apiserver-7c96ff74b7-cc5qg" Feb 9 18:36:31.154614 kubelet[2169]: I0209 18:36:31.154483 2169 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3d0d5fb5-4ded-4def-952a-50770dd688bf-calico-apiserver-certs\") pod \"calico-apiserver-7c96ff74b7-cc5qg\" (UID: \"3d0d5fb5-4ded-4def-952a-50770dd688bf\") " pod="calico-apiserver/calico-apiserver-7c96ff74b7-cc5qg" Feb 9 18:36:31.175000 audit[5318]: NETFILTER_CFG table=filter:123 family=2 entries=31 op=nft_register_rule pid=5318 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:31.176550 kernel: kauditd_printk_skb: 57 callbacks suppressed Feb 9 18:36:31.176617 kernel: audit: type=1325 audit(1707503791.175:441): table=filter:123 family=2 entries=31 op=nft_register_rule pid=5318 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:31.175000 audit[5318]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11068 a0=3 a1=fffffd912f50 a2=0 a3=ffff8da266c0 items=0 ppid=2354 pid=5318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:31.180980 kernel: audit: type=1300 audit(1707503791.175:441): arch=c00000b7 syscall=211 success=yes exit=11068 a0=3 a1=fffffd912f50 a2=0 a3=ffff8da266c0 items=0 ppid=2354 pid=5318 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:31.175000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:31.182461 kernel: audit: type=1327 audit(1707503791.175:441): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:31.183000 audit[5318]: NETFILTER_CFG table=nat:124 family=2 entries=78 op=nft_register_rule pid=5318 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:31.183000 audit[5318]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=fffffd912f50 a2=0 a3=ffff8da266c0 items=0 ppid=2354 pid=5318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:31.194827 kernel: audit: type=1325 audit(1707503791.183:442): table=nat:124 family=2 entries=78 op=nft_register_rule pid=5318 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:31.194891 kernel: audit: type=1300 audit(1707503791.183:442): arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=fffffd912f50 a2=0 a3=ffff8da266c0 items=0 ppid=2354 pid=5318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:31.194913 kernel: audit: type=1327 audit(1707503791.183:442): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:31.183000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 
18:36:31.226000 audit[5344]: NETFILTER_CFG table=filter:125 family=2 entries=32 op=nft_register_rule pid=5344 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:31.226000 audit[5344]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11068 a0=3 a1=ffffca8f51c0 a2=0 a3=ffffa8c366c0 items=0 ppid=2354 pid=5344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:31.232153 kernel: audit: type=1325 audit(1707503791.226:443): table=filter:125 family=2 entries=32 op=nft_register_rule pid=5344 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:31.232216 kernel: audit: type=1300 audit(1707503791.226:443): arch=c00000b7 syscall=211 success=yes exit=11068 a0=3 a1=ffffca8f51c0 a2=0 a3=ffffa8c366c0 items=0 ppid=2354 pid=5344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:31.232253 kernel: audit: type=1327 audit(1707503791.226:443): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:31.226000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:31.229000 audit[5344]: NETFILTER_CFG table=nat:126 family=2 entries=78 op=nft_register_rule pid=5344 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:31.229000 audit[5344]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffca8f51c0 a2=0 a3=ffffa8c366c0 items=0 ppid=2354 pid=5344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Feb 9 18:36:31.229000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:31.239387 kernel: audit: type=1325 audit(1707503791.229:444): table=nat:126 family=2 entries=78 op=nft_register_rule pid=5344 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:31.255137 kubelet[2169]: E0209 18:36:31.255106 2169 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Feb 9 18:36:31.255420 kubelet[2169]: E0209 18:36:31.255402 2169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d0d5fb5-4ded-4def-952a-50770dd688bf-calico-apiserver-certs podName:3d0d5fb5-4ded-4def-952a-50770dd688bf nodeName:}" failed. No retries permitted until 2024-02-09 18:36:31.75518056 +0000 UTC m=+88.805386007 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/3d0d5fb5-4ded-4def-952a-50770dd688bf-calico-apiserver-certs") pod "calico-apiserver-7c96ff74b7-cc5qg" (UID: "3d0d5fb5-4ded-4def-952a-50770dd688bf") : secret "calico-apiserver-certs" not found Feb 9 18:36:32.030132 env[1223]: time="2024-02-09T18:36:32.030063134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c96ff74b7-cc5qg,Uid:3d0d5fb5-4ded-4def-952a-50770dd688bf,Namespace:calico-apiserver,Attempt:0,}" Feb 9 18:36:32.144916 systemd-networkd[1099]: calif1381c8984e: Link UP Feb 9 18:36:32.146421 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 18:36:32.146487 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif1381c8984e: link becomes ready Feb 9 18:36:32.146335 systemd-networkd[1099]: calif1381c8984e: Gained carrier Feb 9 18:36:32.161487 env[1223]: 2024-02-09 18:36:32.072 [INFO][5347] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-calico--apiserver--7c96ff74b7--cc5qg-eth0 calico-apiserver-7c96ff74b7- calico-apiserver 3d0d5fb5-4ded-4def-952a-50770dd688bf 1081 0 2024-02-09 18:36:31 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c96ff74b7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7c96ff74b7-cc5qg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif1381c8984e [] []}} ContainerID="ab021ba9386668f1d4d95ba05e1db19a96ecb64e5b5127eb87df63a7fcd60110" Namespace="calico-apiserver" Pod="calico-apiserver-7c96ff74b7-cc5qg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c96ff74b7--cc5qg-" Feb 9 18:36:32.161487 env[1223]: 2024-02-09 18:36:32.072 [INFO][5347] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="ab021ba9386668f1d4d95ba05e1db19a96ecb64e5b5127eb87df63a7fcd60110" Namespace="calico-apiserver" Pod="calico-apiserver-7c96ff74b7-cc5qg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c96ff74b7--cc5qg-eth0" Feb 9 18:36:32.161487 env[1223]: 2024-02-09 18:36:32.099 [INFO][5362] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ab021ba9386668f1d4d95ba05e1db19a96ecb64e5b5127eb87df63a7fcd60110" HandleID="k8s-pod-network.ab021ba9386668f1d4d95ba05e1db19a96ecb64e5b5127eb87df63a7fcd60110" Workload="localhost-k8s-calico--apiserver--7c96ff74b7--cc5qg-eth0" Feb 9 18:36:32.161487 env[1223]: 2024-02-09 18:36:32.119 [INFO][5362] ipam_plugin.go 268: Auto assigning IP ContainerID="ab021ba9386668f1d4d95ba05e1db19a96ecb64e5b5127eb87df63a7fcd60110" HandleID="k8s-pod-network.ab021ba9386668f1d4d95ba05e1db19a96ecb64e5b5127eb87df63a7fcd60110" Workload="localhost-k8s-calico--apiserver--7c96ff74b7--cc5qg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004b4aa0), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7c96ff74b7-cc5qg", "timestamp":"2024-02-09 18:36:32.099962457 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 18:36:32.161487 env[1223]: 2024-02-09 18:36:32.119 [INFO][5362] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:36:32.161487 env[1223]: 2024-02-09 18:36:32.120 [INFO][5362] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:36:32.161487 env[1223]: 2024-02-09 18:36:32.120 [INFO][5362] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 9 18:36:32.161487 env[1223]: 2024-02-09 18:36:32.121 [INFO][5362] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ab021ba9386668f1d4d95ba05e1db19a96ecb64e5b5127eb87df63a7fcd60110" host="localhost" Feb 9 18:36:32.161487 env[1223]: 2024-02-09 18:36:32.125 [INFO][5362] ipam.go 372: Looking up existing affinities for host host="localhost" Feb 9 18:36:32.161487 env[1223]: 2024-02-09 18:36:32.128 [INFO][5362] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 9 18:36:32.161487 env[1223]: 2024-02-09 18:36:32.130 [INFO][5362] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 9 18:36:32.161487 env[1223]: 2024-02-09 18:36:32.132 [INFO][5362] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 9 18:36:32.161487 env[1223]: 2024-02-09 18:36:32.132 [INFO][5362] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ab021ba9386668f1d4d95ba05e1db19a96ecb64e5b5127eb87df63a7fcd60110" host="localhost" Feb 9 18:36:32.161487 env[1223]: 2024-02-09 18:36:32.134 [INFO][5362] ipam.go 1682: Creating new handle: 
k8s-pod-network.ab021ba9386668f1d4d95ba05e1db19a96ecb64e5b5127eb87df63a7fcd60110 Feb 9 18:36:32.161487 env[1223]: 2024-02-09 18:36:32.137 [INFO][5362] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ab021ba9386668f1d4d95ba05e1db19a96ecb64e5b5127eb87df63a7fcd60110" host="localhost" Feb 9 18:36:32.161487 env[1223]: 2024-02-09 18:36:32.141 [INFO][5362] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.ab021ba9386668f1d4d95ba05e1db19a96ecb64e5b5127eb87df63a7fcd60110" host="localhost" Feb 9 18:36:32.161487 env[1223]: 2024-02-09 18:36:32.141 [INFO][5362] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ab021ba9386668f1d4d95ba05e1db19a96ecb64e5b5127eb87df63a7fcd60110" host="localhost" Feb 9 18:36:32.161487 env[1223]: 2024-02-09 18:36:32.141 [INFO][5362] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 18:36:32.161487 env[1223]: 2024-02-09 18:36:32.141 [INFO][5362] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ab021ba9386668f1d4d95ba05e1db19a96ecb64e5b5127eb87df63a7fcd60110" HandleID="k8s-pod-network.ab021ba9386668f1d4d95ba05e1db19a96ecb64e5b5127eb87df63a7fcd60110" Workload="localhost-k8s-calico--apiserver--7c96ff74b7--cc5qg-eth0" Feb 9 18:36:32.162033 env[1223]: 2024-02-09 18:36:32.142 [INFO][5347] k8s.go 385: Populated endpoint ContainerID="ab021ba9386668f1d4d95ba05e1db19a96ecb64e5b5127eb87df63a7fcd60110" Namespace="calico-apiserver" Pod="calico-apiserver-7c96ff74b7-cc5qg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c96ff74b7--cc5qg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c96ff74b7--cc5qg-eth0", GenerateName:"calico-apiserver-7c96ff74b7-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"3d0d5fb5-4ded-4def-952a-50770dd688bf", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 36, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c96ff74b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7c96ff74b7-cc5qg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif1381c8984e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:36:32.162033 env[1223]: 2024-02-09 18:36:32.143 [INFO][5347] k8s.go 386: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ab021ba9386668f1d4d95ba05e1db19a96ecb64e5b5127eb87df63a7fcd60110" Namespace="calico-apiserver" Pod="calico-apiserver-7c96ff74b7-cc5qg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c96ff74b7--cc5qg-eth0" Feb 9 18:36:32.162033 env[1223]: 2024-02-09 18:36:32.143 [INFO][5347] dataplane_linux.go 68: Setting the host side veth name to calif1381c8984e ContainerID="ab021ba9386668f1d4d95ba05e1db19a96ecb64e5b5127eb87df63a7fcd60110" Namespace="calico-apiserver" Pod="calico-apiserver-7c96ff74b7-cc5qg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c96ff74b7--cc5qg-eth0" Feb 9 18:36:32.162033 env[1223]: 2024-02-09 18:36:32.146 [INFO][5347] dataplane_linux.go 479: Disabling IPv4 
forwarding ContainerID="ab021ba9386668f1d4d95ba05e1db19a96ecb64e5b5127eb87df63a7fcd60110" Namespace="calico-apiserver" Pod="calico-apiserver-7c96ff74b7-cc5qg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c96ff74b7--cc5qg-eth0" Feb 9 18:36:32.162033 env[1223]: 2024-02-09 18:36:32.147 [INFO][5347] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="ab021ba9386668f1d4d95ba05e1db19a96ecb64e5b5127eb87df63a7fcd60110" Namespace="calico-apiserver" Pod="calico-apiserver-7c96ff74b7-cc5qg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c96ff74b7--cc5qg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c96ff74b7--cc5qg-eth0", GenerateName:"calico-apiserver-7c96ff74b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"3d0d5fb5-4ded-4def-952a-50770dd688bf", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 36, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c96ff74b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ab021ba9386668f1d4d95ba05e1db19a96ecb64e5b5127eb87df63a7fcd60110", Pod:"calico-apiserver-7c96ff74b7-cc5qg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif1381c8984e", MAC:"be:b8:61:02:a9:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:36:32.162033 env[1223]: 2024-02-09 18:36:32.156 [INFO][5347] k8s.go 491: Wrote updated endpoint to datastore ContainerID="ab021ba9386668f1d4d95ba05e1db19a96ecb64e5b5127eb87df63a7fcd60110" Namespace="calico-apiserver" Pod="calico-apiserver-7c96ff74b7-cc5qg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c96ff74b7--cc5qg-eth0" Feb 9 18:36:32.178467 env[1223]: time="2024-02-09T18:36:32.177618394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:36:32.178467 env[1223]: time="2024-02-09T18:36:32.177658155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:36:32.178467 env[1223]: time="2024-02-09T18:36:32.177668075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:36:32.178467 env[1223]: time="2024-02-09T18:36:32.177826236Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab021ba9386668f1d4d95ba05e1db19a96ecb64e5b5127eb87df63a7fcd60110 pid=5397 runtime=io.containerd.runc.v2 Feb 9 18:36:32.180000 audit[5403]: NETFILTER_CFG table=filter:127 family=2 entries=55 op=nft_register_chain pid=5403 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 18:36:32.180000 audit[5403]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28088 a0=3 a1=ffffc756b980 a2=0 a3=ffff93ff8fa8 items=0 ppid=4552 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:32.180000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 18:36:32.213627 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 18:36:32.230389 env[1223]: time="2024-02-09T18:36:32.230306079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c96ff74b7-cc5qg,Uid:3d0d5fb5-4ded-4def-952a-50770dd688bf,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ab021ba9386668f1d4d95ba05e1db19a96ecb64e5b5127eb87df63a7fcd60110\"" Feb 9 18:36:32.232314 env[1223]: time="2024-02-09T18:36:32.232274213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 9 18:36:32.274000 audit[5457]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=5457 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:32.274000 audit[5457]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=fffffbe8bcf0 a2=0 
a3=ffffb793e6c0 items=0 ppid=2354 pid=5457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:32.274000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:32.277000 audit[5457]: NETFILTER_CFG table=nat:129 family=2 entries=162 op=nft_register_chain pid=5457 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:32.277000 audit[5457]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=66940 a0=3 a1=fffffbe8bcf0 a2=0 a3=ffffb793e6c0 items=0 ppid=2354 pid=5457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:32.277000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:32.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.89:22-10.0.0.1:47692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:36:32.631264 systemd[1]: Started sshd@22-10.0.0.89:22-10.0.0.1:47692.service. 
Feb 9 18:36:32.671000 audit[5459]: USER_ACCT pid=5459 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:32.672000 audit[5459]: CRED_ACQ pid=5459 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:32.672000 audit[5459]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdff09bb0 a2=3 a3=1 items=0 ppid=1 pid=5459 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:32.672000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 18:36:32.676530 sshd[5459]: Accepted publickey for core from 10.0.0.1 port 47692 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:36:32.673069 sshd[5459]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:36:32.679815 systemd-logind[1201]: New session 23 of user core. Feb 9 18:36:32.680747 systemd[1]: Started session-23.scope. 
Feb 9 18:36:32.684000 audit[5459]: USER_START pid=5459 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:32.685000 audit[5468]: CRED_ACQ pid=5468 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:32.784347 sshd[5459]: pam_unix(sshd:session): session closed for user core Feb 9 18:36:32.784000 audit[5459]: USER_END pid=5459 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:32.784000 audit[5459]: CRED_DISP pid=5459 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:32.786640 systemd[1]: sshd@22-10.0.0.89:22-10.0.0.1:47692.service: Deactivated successfully. Feb 9 18:36:32.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.89:22-10.0.0.1:47692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:36:32.787659 systemd-logind[1201]: Session 23 logged out. Waiting for processes to exit. Feb 9 18:36:32.787723 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 18:36:32.788505 systemd-logind[1201]: Removed session 23. 
Feb 9 18:36:33.259488 systemd-networkd[1099]: calif1381c8984e: Gained IPv6LL Feb 9 18:36:34.402984 env[1223]: time="2024-02-09T18:36:34.402922202Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:34.404371 env[1223]: time="2024-02-09T18:36:34.404329012Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24494ef6c7de0e2dcf21ad9fb6c94801c53f120443e256a5e1b54eccd57058a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:34.405996 env[1223]: time="2024-02-09T18:36:34.405967623Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:34.408150 env[1223]: time="2024-02-09T18:36:34.408123718Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:34.408894 env[1223]: time="2024-02-09T18:36:34.408868083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference \"sha256:24494ef6c7de0e2dcf21ad9fb6c94801c53f120443e256a5e1b54eccd57058a9\"" Feb 9 18:36:34.412267 env[1223]: time="2024-02-09T18:36:34.412218226Z" level=info msg="CreateContainer within sandbox \"ab021ba9386668f1d4d95ba05e1db19a96ecb64e5b5127eb87df63a7fcd60110\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 9 18:36:34.421088 env[1223]: time="2024-02-09T18:36:34.421050607Z" level=info msg="CreateContainer within sandbox \"ab021ba9386668f1d4d95ba05e1db19a96ecb64e5b5127eb87df63a7fcd60110\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"16d0944291cf3a2b9c209e63390bf7711b15ec58f371ed77b40df27cfb6acfe4\"" Feb 9 
18:36:34.422999 env[1223]: time="2024-02-09T18:36:34.422969100Z" level=info msg="StartContainer for \"16d0944291cf3a2b9c209e63390bf7711b15ec58f371ed77b40df27cfb6acfe4\"" Feb 9 18:36:34.441896 systemd[1]: run-containerd-runc-k8s.io-16d0944291cf3a2b9c209e63390bf7711b15ec58f371ed77b40df27cfb6acfe4-runc.o3pzjP.mount: Deactivated successfully. Feb 9 18:36:34.495437 env[1223]: time="2024-02-09T18:36:34.495397559Z" level=info msg="StartContainer for \"16d0944291cf3a2b9c209e63390bf7711b15ec58f371ed77b40df27cfb6acfe4\" returns successfully" Feb 9 18:36:35.044000 audit[5545]: NETFILTER_CFG table=filter:130 family=2 entries=8 op=nft_register_rule pid=5545 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:35.044000 audit[5545]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=fffff9332750 a2=0 a3=ffff805586c0 items=0 ppid=2354 pid=5545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:35.044000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:35.047000 audit[5545]: NETFILTER_CFG table=nat:131 family=2 entries=198 op=nft_register_rule pid=5545 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:35.047000 audit[5545]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=66940 a0=3 a1=fffff9332750 a2=0 a3=ffff805586c0 items=0 ppid=2354 pid=5545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:35.047000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:35.326641 kubelet[2169]: I0209 18:36:35.326539 2169 
pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7c96ff74b7-cc5qg" podStartSLOduration=-9.22337203252827e+09 pod.CreationTimestamp="2024-02-09 18:36:31 +0000 UTC" firstStartedPulling="2024-02-09 18:36:32.231679488 +0000 UTC m=+89.281884935" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:36:35.325612112 +0000 UTC m=+92.375817519" watchObservedRunningTime="2024-02-09 18:36:35.326505719 +0000 UTC m=+92.376711166" Feb 9 18:36:35.379000 audit[5572]: NETFILTER_CFG table=filter:132 family=2 entries=8 op=nft_register_rule pid=5572 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:35.379000 audit[5572]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffffbd6d10 a2=0 a3=ffff82f296c0 items=0 ppid=2354 pid=5572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:35.379000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:35.382000 audit[5572]: NETFILTER_CFG table=nat:133 family=2 entries=198 op=nft_register_rule pid=5572 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:35.382000 audit[5572]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=66940 a0=3 a1=ffffffbd6d10 a2=0 a3=ffff82f296c0 items=0 ppid=2354 pid=5572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:35.382000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:37.788091 systemd[1]: Started sshd@23-10.0.0.89:22-10.0.0.1:47706.service. 
Feb 9 18:36:37.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.89:22-10.0.0.1:47706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:36:37.791063 kernel: kauditd_printk_skb: 34 callbacks suppressed Feb 9 18:36:37.791147 kernel: audit: type=1130 audit(1707503797.787:461): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.89:22-10.0.0.1:47706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:36:37.832000 audit[5575]: USER_ACCT pid=5575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:37.833123 sshd[5575]: Accepted publickey for core from 10.0.0.1 port 47706 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:36:37.835000 audit[5575]: CRED_ACQ pid=5575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:37.836615 sshd[5575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:36:37.838336 kernel: audit: type=1101 audit(1707503797.832:462): pid=5575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:37.838416 kernel: audit: type=1103 audit(1707503797.835:463): pid=5575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:37.838439 kernel: audit: type=1006 audit(1707503797.835:464): pid=5575 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Feb 9 18:36:37.839660 kernel: audit: type=1300 audit(1707503797.835:464): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdb051e10 a2=3 a3=1 items=0 ppid=1 pid=5575 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:37.835000 audit[5575]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdb051e10 a2=3 a3=1 items=0 ppid=1 pid=5575 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:37.841139 systemd[1]: Started session-24.scope. Feb 9 18:36:37.841384 systemd-logind[1201]: New session 24 of user core. 
Feb 9 18:36:37.835000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 18:36:37.842675 kernel: audit: type=1327 audit(1707503797.835:464): proctitle=737368643A20636F7265205B707269765D Feb 9 18:36:37.844000 audit[5575]: USER_START pid=5575 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:37.846000 audit[5578]: CRED_ACQ pid=5578 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:37.850069 kernel: audit: type=1105 audit(1707503797.844:465): pid=5575 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:37.850104 kernel: audit: type=1103 audit(1707503797.846:466): pid=5578 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:37.952607 sshd[5575]: pam_unix(sshd:session): session closed for user core Feb 9 18:36:37.953000 audit[5575]: USER_END pid=5575 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:37.955292 systemd[1]: sshd@23-10.0.0.89:22-10.0.0.1:47706.service: Deactivated successfully. 
Feb 9 18:36:37.956245 systemd-logind[1201]: Session 24 logged out. Waiting for processes to exit. Feb 9 18:36:37.956313 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 18:36:37.953000 audit[5575]: CRED_DISP pid=5575 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:37.958959 kernel: audit: type=1106 audit(1707503797.953:467): pid=5575 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:37.959021 kernel: audit: type=1104 audit(1707503797.953:468): pid=5575 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:37.957013 systemd-logind[1201]: Removed session 24. Feb 9 18:36:37.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.89:22-10.0.0.1:47706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:36:39.088049 kubelet[2169]: E0209 18:36:39.088007 2169 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:36:42.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.89:22-10.0.0.1:35674 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:36:42.955934 systemd[1]: Started sshd@24-10.0.0.89:22-10.0.0.1:35674.service. 
Feb 9 18:36:42.957522 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 18:36:42.957560 kernel: audit: type=1130 audit(1707503802.955:470): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.89:22-10.0.0.1:35674 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:36:42.999000 audit[5592]: USER_ACCT pid=5592 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:43.000698 sshd[5592]: Accepted publickey for core from 10.0.0.1 port 35674 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:36:43.002453 sshd[5592]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:36:43.001000 audit[5592]: CRED_ACQ pid=5592 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:43.005164 kernel: audit: type=1101 audit(1707503802.999:471): pid=5592 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:43.005244 kernel: audit: type=1103 audit(1707503803.001:472): pid=5592 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:43.005264 kernel: audit: type=1006 audit(1707503803.001:473): pid=5592 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 
res=1 Feb 9 18:36:43.001000 audit[5592]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffe412510 a2=3 a3=1 items=0 ppid=1 pid=5592 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:43.008896 kernel: audit: type=1300 audit(1707503803.001:473): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffe412510 a2=3 a3=1 items=0 ppid=1 pid=5592 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:43.008983 kernel: audit: type=1327 audit(1707503803.001:473): proctitle=737368643A20636F7265205B707269765D Feb 9 18:36:43.001000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 18:36:43.013108 systemd-logind[1201]: New session 25 of user core. Feb 9 18:36:43.013339 systemd[1]: Started session-25.scope. Feb 9 18:36:43.016000 audit[5592]: USER_START pid=5592 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:43.018000 audit[5595]: CRED_ACQ pid=5595 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:43.022510 kernel: audit: type=1105 audit(1707503803.016:474): pid=5592 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:43.022604 kernel: audit: type=1103 
audit(1707503803.018:475): pid=5595 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:43.124566 sshd[5592]: pam_unix(sshd:session): session closed for user core Feb 9 18:36:43.125000 audit[5592]: USER_END pid=5592 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:43.127402 systemd[1]: sshd@24-10.0.0.89:22-10.0.0.1:35674.service: Deactivated successfully. Feb 9 18:36:43.125000 audit[5592]: CRED_DISP pid=5592 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:43.128860 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 18:36:43.129454 systemd-logind[1201]: Session 25 logged out. Waiting for processes to exit. Feb 9 18:36:43.130167 systemd-logind[1201]: Removed session 25. 
Feb 9 18:36:43.130428 kernel: audit: type=1106 audit(1707503803.125:476): pid=5592 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:43.130480 kernel: audit: type=1104 audit(1707503803.125:477): pid=5592 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:43.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.89:22-10.0.0.1:35674 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:36:48.127363 systemd[1]: Started sshd@25-10.0.0.89:22-10.0.0.1:35690.service. Feb 9 18:36:48.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.89:22-10.0.0.1:35690 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:36:48.128442 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 18:36:48.128516 kernel: audit: type=1130 audit(1707503808.126:479): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.89:22-10.0.0.1:35690 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:36:48.182000 audit[5628]: USER_ACCT pid=5628 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:48.184704 sshd[5628]: Accepted publickey for core from 10.0.0.1 port 35690 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:36:48.186418 sshd[5628]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:36:48.185000 audit[5628]: CRED_ACQ pid=5628 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:48.189350 kernel: audit: type=1101 audit(1707503808.182:480): pid=5628 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:48.189431 kernel: audit: type=1103 audit(1707503808.185:481): pid=5628 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:48.189459 kernel: audit: type=1006 audit(1707503808.185:482): pid=5628 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Feb 9 18:36:48.190761 kernel: audit: type=1300 audit(1707503808.185:482): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff861b2d0 a2=3 a3=1 items=0 ppid=1 pid=5628 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 
18:36:48.185000 audit[5628]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff861b2d0 a2=3 a3=1 items=0 ppid=1 pid=5628 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:48.194094 systemd-logind[1201]: New session 26 of user core. Feb 9 18:36:48.195107 systemd[1]: Started session-26.scope. Feb 9 18:36:48.199183 kernel: audit: type=1327 audit(1707503808.185:482): proctitle=737368643A20636F7265205B707269765D Feb 9 18:36:48.185000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 18:36:48.200074 kernel: audit: type=1105 audit(1707503808.198:483): pid=5628 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:48.198000 audit[5628]: USER_START pid=5628 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:48.199000 audit[5631]: CRED_ACQ pid=5631 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:48.206198 kernel: audit: type=1103 audit(1707503808.199:484): pid=5631 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:48.321531 sshd[5628]: pam_unix(sshd:session): session closed for user core Feb 9 18:36:48.321000 audit[5628]: 
USER_END pid=5628 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:48.324345 systemd-logind[1201]: Session 26 logged out. Waiting for processes to exit. Feb 9 18:36:48.324592 systemd[1]: sshd@25-10.0.0.89:22-10.0.0.1:35690.service: Deactivated successfully. Feb 9 18:36:48.325379 kernel: audit: type=1106 audit(1707503808.321:485): pid=5628 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:48.325443 kernel: audit: type=1104 audit(1707503808.321:486): pid=5628 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:48.321000 audit[5628]: CRED_DISP pid=5628 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:36:48.325534 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 18:36:48.326912 systemd-logind[1201]: Removed session 26. Feb 9 18:36:48.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.89:22-10.0.0.1:35690 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'